Part A: Convolutional Neural Network¶

__Previously__¶


  1. 128 x 128 Baseline Model was done
  2. Vgg_Model, CNN model 1 and CNN model 2 was decided as the best 3 models

__Import Libraries__¶


Here is to import all the necessary libraries

In [1]:
# !unzip 1.zip
In [2]:
%pip install tensorflow_addons keras-tuner pandas matplotlib seaborn scikit-learn tqdm efficientnet numba 
Requirement already satisfied: tensorflow_addons in /usr/local/lib/python3.8/dist-packages (0.21.0)
Requirement already satisfied: keras-tuner in /usr/local/lib/python3.8/dist-packages (1.4.6)
Requirement already satisfied: pandas in /usr/local/lib/python3.8/dist-packages (2.0.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.8/dist-packages (3.5.2)
Requirement already satisfied: seaborn in /usr/local/lib/python3.8/dist-packages (0.13.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.8/dist-packages (1.3.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (4.66.1)
Requirement already satisfied: efficientnet in /usr/local/lib/python3.8/dist-packages (1.1.1)
Requirement already satisfied: numba in /usr/local/lib/python3.8/dist-packages (0.58.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from tensorflow_addons) (21.3)
Requirement already satisfied: typeguard<3.0.0,>=2.7 in /usr/local/lib/python3.8/dist-packages (from tensorflow_addons) (2.13.3)
Requirement already satisfied: keras in /usr/local/lib/python3.8/dist-packages (from keras-tuner) (2.9.0)
Requirement already satisfied: kt-legacy in /usr/local/lib/python3.8/dist-packages (from keras-tuner) (1.0.5)
Requirement already satisfied: requests in /usr/local/lib/python3.8/dist-packages (from keras-tuner) (2.31.0)
Requirement already satisfied: tzdata>=2022.1 in /usr/local/lib/python3.8/dist-packages (from pandas) (2023.3)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from pandas) (2.8.2)
Requirement already satisfied: numpy>=1.20.3; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from pandas) (1.22.3)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.8/dist-packages (from pandas) (2023.3.post1)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.8/dist-packages (from matplotlib) (9.1.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.8/dist-packages (from matplotlib) (1.4.2)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.8/dist-packages (from matplotlib) (4.33.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.8/dist-packages (from matplotlib) (0.11.0)
Requirement already satisfied: pyparsing>=2.2.1 in /usr/local/lib/python3.8/dist-packages (from matplotlib) (3.0.9)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (3.2.0)
Requirement already satisfied: scipy>=1.5.0 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (1.10.1)
Requirement already satisfied: joblib>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from scikit-learn) (1.3.2)
Requirement already satisfied: keras-applications<=1.0.8,>=1.0.7 in /usr/local/lib/python3.8/dist-packages (from efficientnet) (1.0.8)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.8/dist-packages (from efficientnet) (0.21.0)
Requirement already satisfied: llvmlite<0.42,>=0.41.0dev0 in /usr/local/lib/python3.8/dist-packages (from numba) (0.41.1)
Requirement already satisfied: importlib-metadata; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from numba) (4.11.3)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/lib/python3/dist-packages (from requests->keras-tuner) (1.25.8)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.8/dist-packages (from requests->keras-tuner) (3.3.2)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->keras-tuner) (2019.11.28)
Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests->keras-tuner) (2.8)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil>=2.8.2->pandas) (1.14.0)
Requirement already satisfied: h5py in /usr/local/lib/python3.8/dist-packages (from keras-applications<=1.0.8,>=1.0.7->efficientnet) (3.6.0)
Requirement already satisfied: networkx>=2.8 in /usr/local/lib/python3.8/dist-packages (from scikit-image->efficientnet) (3.1)
Requirement already satisfied: imageio>=2.27 in /usr/local/lib/python3.8/dist-packages (from scikit-image->efficientnet) (2.33.0)
Requirement already satisfied: lazy_loader>=0.2 in /usr/local/lib/python3.8/dist-packages (from scikit-image->efficientnet) (0.3)
Requirement already satisfied: tifffile>=2022.8.12 in /usr/local/lib/python3.8/dist-packages (from scikit-image->efficientnet) (2023.7.10)
Requirement already satisfied: PyWavelets>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from scikit-image->efficientnet) (1.4.1)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.8/dist-packages (from importlib-metadata; python_version < "3.9"->numba) (3.8.0)
WARNING: You are using pip version 20.2.4; however, version 23.3.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
Note: you may need to restart the kernel to use updated packages.
In [3]:
# !sudo nvidia-smi --gpu-reset -i 0

Other Imports¶

In [4]:
import numpy as np
import pandas as pd

import seaborn as sns

from matplotlib import pyplot as plt

from sklearn.metrics import classification_report, roc_auc_score, average_precision_score, confusion_matrix, cohen_kappa_score, matthews_corrcoef
from sklearn.metrics import classification_report,accuracy_score, confusion_matrix
from sklearn.decomposition import PCA
from sklearn.preprocessing import Normalizer
from sklearn.metrics import classification_report, accuracy_score

import os, time, math, datetime, warnings, pytz, glob
from IPython.display import display
from functools import reduce
import absl.logging
from tqdm import tqdm
import logging
from efficientnet.tfkeras import EfficientNetB3

absl.logging.set_verbosity(absl.logging.ERROR)
logging.getLogger('tensorflow').disabled = True
warnings.filterwarnings('ignore')

Tensorflow Import¶

In [5]:
import tensorflow as tf
from tensorflow.keras.utils import Sequence, to_categorical
from tensorflow import expand_dims
from tensorflow.keras import Sequential
from tensorflow.keras import layers as L
from tensorflow.keras import backend as K
from tensorflow.image import random_flip_left_right, random_crop, resize_with_crop_or_pad
from tensorflow.keras.utils import to_categorical, plot_model
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import (Dense, Input, InputLayer, Normalization, Flatten,BatchNormalization,
    Dropout,Conv2D, GlobalAveragePooling2D, MaxPooling2D, ReLU, Layer,Activation, Multiply, AveragePooling2D,
    Add, RandomRotation,Resizing, Rescaling, Reshape, Concatenate, concatenate, Lambda,LeakyReLU, ZeroPadding2D)
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler, ReduceLROnPlateau, TerminateOnNaN, TensorBoard, CSVLogger, Callback
from tensorflow.keras.backend import clear_session
from tensorflow.keras.optimizers import RMSprop, SGD, Adam, Adagrad, Adamax
from tensorflow.keras.regularizers import l2, L2
from tensorflow.keras.optimizers.schedules import *
from tensorflow.keras.metrics import FalseNegatives, categorical_crossentropy, sparse_categorical_crossentropy
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.image import *
from tensorflow_addons.optimizers import SWA
from tensorflow.keras import layers

from kerastuner.tuners import Hyperband
from kerastuner import RandomSearch
from kerastuner import HyperModel
# Setting a seaborn style
sns.set(style="whitegrid")

Set the seed of this notebook¶

In [6]:
seed = 32
tf.random.set_seed(seed)
np.random.seed(seed)

__Check for GPU__¶


Here is check the available GPUs and set the memory growth

In [7]:
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)

        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(f"{len(gpus)} Physical GPUs, {len(logical_gpus)} Logical GPU")
    except RuntimeError as e:
        print(e)
1 Physical GPUs, 1 Logical GPU
In [8]:
!nvidia-smi
Sun Nov 26 16:01:29 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        On  | 00000000:03:00.0 Off |                  Off |
|  0%   35C    P2              51W / 450W |   5471MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+

__Import dataset__¶


Here is to import the dataset and proceed to do analysis on it

128 x 128 Images¶

In [9]:
data_big = tf.keras.utils.image_dataset_from_directory('Dataset for CA1 part A/train'  ,
                                                   color_mode='rgb',
                                                   image_size=(128,128))
data_big
Found 9028 files belonging to 15 classes.
Out[9]:
<BatchDataset element_spec=(TensorSpec(shape=(None, 128, 128, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))>

Train Data¶

In [10]:
X_train_big = []
y_train_big = []

for images, labels in tqdm(data_big):
    images = tf.image.rgb_to_grayscale(images)
    X_train_big.append(images)
    y_train_big.append(labels)

X_train_big = np.concatenate(X_train_big, axis=0)
X_train_big = np.squeeze(X_train_big, axis=-1)
y_train_big = np.concatenate(y_train_big, axis=0)
100%|██████████| 283/283 [00:02<00:00, 116.51it/s]

Validation Data¶

In [11]:
val_data_big = tf.keras.utils.image_dataset_from_directory('Dataset for CA1 part A/validation'  ,
                                                   color_mode='rgb',
                                                   image_size=(128,128))
val_data_big
Found 3000 files belonging to 15 classes.
Out[11]:
<BatchDataset element_spec=(TensorSpec(shape=(None, 128, 128, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))>
In [12]:
X_val_big = []
y_val_big = []

for images, labels in tqdm(val_data_big):
    images = tf.image.rgb_to_grayscale(images)
    X_val_big.append(images)
    y_val_big.append(labels)

X_val_big = np.concatenate(X_val_big, axis=0)
X_val_big = np.squeeze(X_val_big, axis=-1)
y_val_big = np.concatenate(y_val_big, axis=0)
100%|██████████| 94/94 [00:00<00:00, 170.01it/s]

Test Data¶

In [13]:
test_data_big = tf.keras.utils.image_dataset_from_directory('Dataset for CA1 part A/test'  ,
                                                   color_mode='rgb',
                                                   image_size=(128,128))
test_data_big
Found 3000 files belonging to 15 classes.
Out[13]:
<BatchDataset element_spec=(TensorSpec(shape=(None, 128, 128, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))>
In [14]:
X_test_big = []
y_test_big = []

for images, labels in tqdm(test_data_big):
    images = tf.image.rgb_to_grayscale(images)
    X_test_big.append(images)
    y_test_big.append(labels)

X_test_big = np.concatenate(X_test_big, axis=0)
X_test_big = np.squeeze(X_test_big, axis=-1)
y_test_big = np.concatenate(y_test_big, axis=0)
100%|██████████| 94/94 [00:00<00:00, 167.22it/s]
In [15]:
labels_dict = os.listdir('Dataset for CA1 part A/train')
labels_dict = {idx: label for idx, label in enumerate(labels_dict)}
print(labels_dict)
{0: 'Bean', 1: 'Bitter_Gourd', 2: 'Bottle_Gourd', 3: 'Brinjal', 4: 'Broccoli', 5: 'Cabbage', 6: 'Capsicum', 7: 'Carrot', 8: 'Cauliflower', 9: 'Cucumber', 10: 'Papaya', 11: 'Potato', 12: 'Pumpkin', 13: 'Radish', 14: 'Tomato'}

__Required Functions__¶


Here is to define all the required functions

In [16]:
LR = 0.01
MOMENTUM = 0.9
WEIGHT_DECAY = 0.0005
val_split = 0.2
max_epochs = 100
In [17]:
y_train_big = to_categorical(y_train_big)
y_val_big = to_categorical(y_val_big)

VGG 16 Model¶

In [18]:
def vgg_conv_layer(filters, kernel_size=3, activation='relu', weight_decay=WEIGHT_DECAY):
    return Sequential([
        Conv2D(filters, kernel_size, padding='same', activation=None, kernel_regularizer=l2(weight_decay)),
        BatchNormalization(),
        ReLU() if activation == 'relu' else activation
    ])

def vgg_conv_block(no_layers, filters, activation='relu'):
    block = Sequential()
    for _ in range(no_layers):
        block.add(vgg_conv_layer(filters, activation=activation))
    block.add(MaxPooling2D(pool_size=(2, 2), strides=2))
    return block

class VGGNet(Model):
    def __init__(self, input_shape, num_classes, name='VGGNet_Baseline'):
        super(VGGNet, self).__init__(name=name)
        self.vgg_blocks = [
            vgg_conv_block(2, 32, activation='relu'),
            vgg_conv_block(3, 64, activation='relu'),
            vgg_conv_block(3, 128, activation='relu'),
            vgg_conv_block(3, 256, activation='relu'),
        ]
        self.global_pool = GlobalAveragePooling2D()
        self.classifier = Sequential([
            Dropout(0.3),
            Dense(num_classes, activation='softmax')
        ])

    def call(self, inputs):
        x = inputs
        for block in self.vgg_blocks:
            x = block(x)
        x = self.global_pool(x)
        return self.classifier(x)

    def build(self, input_shape):
        inputs = Input(shape=(128,128,1))
        model = Model(inputs=inputs, outputs=self.call(inputs), name=self.name)
        return model
input_shape = (128, 128, 1)
num_classes = 15
vgg = VGGNet(input_shape, num_classes)
vgg_model = vgg.build(input_shape)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
vgg_model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

CNN Baseline 2¶

In [19]:
def CNN2(name='CNN2_Baseline'):

    filters_list = [32, 64, 128, 128, 128, 128]

    model = Sequential()

    # Add initial convolutional layers
    model.add(Conv2D(filters=filters_list[0], kernel_size=(3, 3), padding='same', input_shape=(128, 128, 1)))
    model.add(BatchNormalization())
    model.add(ReLU())

    model.add(Conv2D(filters=filters_list[1], kernel_size=(3, 3), padding='same'))
    model.add(BatchNormalization())
    model.add(ReLU())

    for filters in filters_list[2:]:
        model.add(Conv2D(filters=filters, kernel_size=(3, 3), padding='same'))
        model.add(BatchNormalization())
        model.add(ReLU())

    model.add(GlobalAveragePooling2D())
    model.add(Dense(256, activation='relu'))
    model.add(Dense(15, activation='softmax')) 

    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    return model
In [20]:
no = lambda x : x
def data_before_train(self, aug_func=no):
    train_ds = tf.data.Dataset.from_tensor_slices((X_train_big, y_train_big)).map(lambda x, y : (expand_dims(x, -1), y))
    val_ds = tf.data.Dataset.from_tensor_slices((X_val_big, y_val_big)).shuffle(128 * 100).batch(128).map(lambda x, y : (expand_dims(x, -1), y)).prefetch(tf.data.AUTOTUNE)
    train_ds = train_ds.map(lambda x, y : (aug_func(x), y), num_parallel_calls=tf.data.AUTOTUNE).shuffle(128 * 100).batch(128).prefetch(tf.data.AUTOTUNE)
    return train_ds, val_ds
train_ds, val_ds = data_before_train(no)
In [21]:
def evaluation_test(model, X_val, y_val, LABELS):
    final_predictions = model.predict(X_val)
    final_predictions_final = np.argmax(final_predictions, axis=1)
    y_test_final = np.argmax(y_val, axis=1) if y_val.ndim == 2 else y_val
    classification_results = classification_report(y_test_final, final_predictions_final, target_names=LABELS.values(), output_dict=True)

    print('Accuracy:', accuracy_score(y_test_final, final_predictions_final))
    result = pd.DataFrame(classification_results).transpose().sort_values('f1-score')
    return result
In [22]:
def plot_classification_heatmap(df):
    heatmap_data = df.drop(['accuracy', 'macro avg', 'weighted avg']).iloc[:, :-1]
    plt.figure(figsize=(10, 8))
    sns.heatmap(heatmap_data, annot=True, fmt=".2f", cmap="Blues")
    plt.title("Classification Report Heatmap")
    plt.show()
In [23]:
class Evaluator:
  def __init__(self, path=None, project_name="CNN_CA1"):
      # Initialize wandb
      self.project_name = project_name

      # Your existing initialization code
      if path:
          self.result = pd.read_csv(path, sep=';')
      else:
          cols = ['Model Name', 'Batch Size', "Train Loss", "Test Loss", "Train Acc", "Test Acc", "Remarks"]
          self.result = pd.DataFrame(columns=cols)

      self.callback = [
          TerminateOnNaN()
      ]
      # self.api = wandb.Api()

  def train_model(self, model, train, val, hyperparameters, callbacks):
    tf.keras.backend.clear_session()
    epochs = hyperparameters['max_epochs']
    batch_size = 64

    if val is None:
      X_train, y_train = train
      return model.fit(X_train, y_train, epochs=epochs, batch_size=64, validation_split = hyperparameters["val_split"], callbacks=callbacks).history
    else:
      return model.fit(train, validation_data = val, epochs = epochs, batch_size = batch_size , callbacks=callbacks).history

  def model_evaluate(self, train, val, model, hyperparameters, callbacks=None, plot_loss=True, remarks=""):
      tf.keras.backend.clear_session()
      # wandb.init(project=self.project_name)
      callbacks = (callbacks or [EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True), ReduceLROnPlateau(patience=5)]) + self.callback
      history = self.train_model(model, train, val, hyperparameters, callbacks)
      bestval_index = np.argmax(history['val_accuracy'])


      if plot_loss:
          try:
              fig = plot_model_history(history)
          except Exception as e:
              print(e, 'here')
              print("error creating loss curve")
              fig = None

      result = {
          "Model Name": model.name,
          "Epochs": len(history["loss"]),
          "Batch Size": hyperparameters["batch_size"],
          "Train Loss": history["loss"][bestval_index],
          "Test Loss": history["val_loss"][bestval_index],
          "Train Acc": history["accuracy"][bestval_index],
          "Test Acc": history["val_accuracy"][bestval_index]
      }
      # wandb.log({
      #     "Model Name": model.name,
      #     "Epochs": len(history["loss"]),
      #     "Batch Size": hyperparameters["batch_size"],
      #     "Train Loss": history["loss"][bestval_index],
      #     "Test Loss": history["val_loss"][bestval_index],
      #     "Train Acc": history["accuracy"][bestval_index],
      #     "Test Acc": history["val_accuracy"][bestval_index],
      # })

      # wandb.finish()
      return result, fig

  def _train_model(self, model, training_data, validation_data, hyperparameters, callbacks):
      tf.keras.backend.clear_session()
      epochs = hyperparameters["max_epochs"]
      batch_size = hyperparameters["batch_size"]

      if validation_data is None:
          X_train, y_train = training_data
          return model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, validation_split=hyperparameters["val_split"], callbacks=callbacks).history

      return model.fit(training_data, validation_data=validation_data, epochs=epochs, batch_size=batch_size, callbacks=callbacks).history


  def return_history(self, project_name='CNN_CA1', entity=None, include_cols=None):
      if include_cols is None:
          include_cols = ['Model Name', 'Epochs', 'Batch Size', 'Train Loss','Test Loss','Train Acc','Test Acc','Remarks']

      runs = self.api.runs('111da/CNN_CA1')
      # List to store results
      results = []

      for run in runs:
          run_summary = run.summary
          data = {}
          for col in include_cols:
              data[col] = run_summary.get(col, None)
          results.append(data)
      df = pd.DataFrame(results)

      return df


  def remove_model(self, model_name):
      self.result = self.result[self.result["Model Name"] != model_name]
In [24]:
def plot_model_history(model_history):
    history_df = pd.DataFrame(model_history)
    fig, axs = plt.subplots(1, 2, figsize=(14, 5))

    axs[0].plot(history_df['loss'], 'g--', label='Training Loss')
    axs[0].plot(history_df['val_loss'], 'b-', label='Validation Loss')
    axs[0].set_title('Training and Validation Loss')
    axs[0].set_xlabel('Epochs')
    axs[0].set_ylabel('Loss')
    axs[0].legend()
    axs[0].grid(True)

    if 'accuracy' in history_df and 'val_accuracy' in history_df:
        axs[1].plot(history_df['accuracy'], 'g--', label='Training Accuracy')
        axs[1].plot(history_df['val_accuracy'], 'b-', label='Validation Accuracy')
        axs[1].set_title('Training and Validation Accuracy')
        axs[1].set_xlabel('Epochs')
        axs[1].set_ylabel('Accuracy')
        axs[1].legend()
        axs[1].grid(True)

    plt.tight_layout()
    plt.close(fig)

    def show():
        display(fig)

    return show

__Feature Engineering - Data Augmentation__¶


  • Model may not be as stable as we wnat hence
  • Is it possible to improve further with data augmentation ?
In [25]:
evaluator = Evaluator()
In [26]:
columns = ["Model Name", "Epochs", "Batch Size", "Train Loss", "Test Loss", "Train Acc", "Test Acc",'Kappa', "Comments"]
overall = pd.DataFrame(columns=columns)
print(overall)
Empty DataFrame
Columns: [Model Name, Epochs, Batch Size, Train Loss, Test Loss, Train Acc, Test Acc, Kappa, Comments]
Index: []

Setting up the baseParams¶

In [27]:
base_hparams = {"val_split" : val_split, "max_epochs" : max_epochs,"batch_size" : 128}
steps_per_epoch = np.ceil(len(X_train_big) / 128)
steps_per_epoch
Out[27]:
71.0
In [28]:
def models_array():
    def baseline_model2(optimizer='adam', name='CNN_Baseline'):
        model = Sequential(name=name)

        # Input layer
        model.add(Input(shape=(128, 128, 1)))

        # First Convolutional Block
        model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
        model.add(BatchNormalization())
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.3))

        # Second Convolutional Block
        model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
        model.add(BatchNormalization())
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.3))

        # Third Convolutional Block
        model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
        model.add(BatchNormalization())
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.4))

        model.add(Flatten())

        # Dense Block
        model.add(Dense(512, activation='relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

        # Dense Block
        model.add(Dense(128, activation='relu'))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

        # Output Layer
        model.add(Dense(15, activation='softmax'))

        # Compile the model
        model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

        model.summary()

        return model
    input_shape = (128, 128, 1)
    num_classes = 15
    
    vgg = VGGNet(input_shape, num_classes)
    vgg_model = vgg.build(input_shape)
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
    vgg_model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    
    cnn2 = CNN2()
    cnn = baseline_model2()
    models = [cnn, cnn2, vgg_model ]
    return models

model_names = ['CNN','CNN2','VGG_Baseline']

Basic Data Augmentation¶

Data Augmentation 1¶

  • Provide flip to image horizontally and vertically randomly
  • Randomly rotate the image of 90 degrees
In [29]:
def data_augmentation_1(image):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.image.rot90(image, k=tf.random.uniform(shape=[], minval=0, maxval=4, dtype=tf.int32))    
    return image

Process the Train and Validation data¶

In [30]:
def preprocess2(image, label):
    if len(image.shape) == 2:
        image = tf.expand_dims(image, axis=-1)
    image = data_augmentation_1(image)
    return image, label
In [31]:
train_ds_basic = tf.data.Dataset.from_tensor_slices((X_train_big, y_train_big))
train_ds_basic = train_ds_basic.map(preprocess2).batch(128).prefetch(tf.data.AUTOTUNE)

val_ds_basic = tf.data.Dataset.from_tensor_slices((X_val_big, y_val_big))
val_ds_basic = val_ds_basic.map(preprocess2).batch(128).prefetch(tf.data.AUTOTUNE)

Visualise the images¶

In [32]:
def visualisation(dataset, num_images, type, type2):
    plt.figure(figsize=(num_images * 3, 3))
    plt.suptitle(f'Image Visualization {type} on {type2} dataset')
    for images, labels in dataset.take(1):
        for i in range(num_images):
            ax = plt.subplot(1, num_images, i + 1)
            plt.imshow(images[i].numpy().squeeze(), cmap='gray')  
            plt.axis("off")
    plt.show()
In [33]:
visualisation(train_ds_basic,5,'(basic)','train')

Run the model¶

In [34]:
figures = []
models = models_array()
for i in range(len(models)):
    print(f'Running {model_names[i]}')
    results, fig = evaluator.model_evaluate( train_ds_basic, val_ds_basic , models[i], base_hparams)
    results['Model Name'] = f'{model_names[i]} basic one'
    y_pred = models[i].predict(X_test_big)
    y_pred_classes = np.argmax(y_pred, axis=1)
    kappa = cohen_kappa_score(y_test_big, y_pred_classes)
    print("Cohen’s Kappa Score:", kappa)
    results['Kappa'] = kappa
    overall = pd.concat([overall, pd.DataFrame([results])], ignore_index=True)
    figures.append(fig)
    
Model: "sequential_32"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_22 (Conv2D)          (None, 128, 128, 32)      320       
                                                                 
 batch_normalization_22 (Bat  (None, 128, 128, 32)     128       
 chNormalization)                                                
                                                                 
 re_lu_22 (ReLU)             (None, 128, 128, 32)      0         
                                                                 
 conv2d_23 (Conv2D)          (None, 128, 128, 64)      18496     
                                                                 
 batch_normalization_23 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 re_lu_23 (ReLU)             (None, 128, 128, 64)      0         
                                                                 
 conv2d_24 (Conv2D)          (None, 128, 128, 128)     73856     
                                                                 
 batch_normalization_24 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_24 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_25 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_25 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_25 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_26 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_26 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_26 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_27 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_27 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_27 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 global_average_pooling2d_2   (None, 128)              0         
 (GlobalAveragePooling2D)                                        
                                                                 
 dense_2 (Dense)             (None, 256)               33024     
                                                                 
 dense_3 (Dense)             (None, 15)                3855      
                                                                 
=================================================================
Total params: 574,735
Trainable params: 573,519
Non-trainable params: 1,216
_________________________________________________________________
Model: "CNN_Baseline"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_28 (Conv2D)          (None, 128, 128, 64)      640       
                                                                 
 batch_normalization_28 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 max_pooling2d_8 (MaxPooling  (None, 64, 64, 64)       0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 64, 64, 64)        0         
                                                                 
 conv2d_29 (Conv2D)          (None, 64, 64, 128)       73856     
                                                                 
 batch_normalization_29 (Bat  (None, 64, 64, 128)      512       
 chNormalization)                                                
                                                                 
 max_pooling2d_9 (MaxPooling  (None, 32, 32, 128)      0         
 2D)                                                             
                                                                 
 dropout_3 (Dropout)         (None, 32, 32, 128)       0         
                                                                 
 conv2d_30 (Conv2D)          (None, 32, 32, 256)       295168    
                                                                 
 batch_normalization_30 (Bat  (None, 32, 32, 256)      1024      
 chNormalization)                                                
                                                                 
 max_pooling2d_10 (MaxPoolin  (None, 16, 16, 256)      0         
 g2D)                                                            
                                                                 
 dropout_4 (Dropout)         (None, 16, 16, 256)       0         
                                                                 
 flatten (Flatten)           (None, 65536)             0         
                                                                 
 dense_4 (Dense)             (None, 512)               33554944  
                                                                 
 batch_normalization_31 (Bat  (None, 512)              2048      
 chNormalization)                                                
                                                                 
 dropout_5 (Dropout)         (None, 512)               0         
                                                                 
 dense_5 (Dense)             (None, 128)               65664     
                                                                 
 batch_normalization_32 (Bat  (None, 128)              512       
 chNormalization)                                                
                                                                 
 dropout_6 (Dropout)         (None, 128)               0         
                                                                 
 dense_6 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 33,996,559
Trainable params: 33,994,383
Non-trainable params: 2,176
_________________________________________________________________
Running CNN
Epoch 1/100
71/71 [==============================] - 10s 90ms/step - loss: 2.7465 - accuracy: 0.2442 - val_loss: 5.1188 - val_accuracy: 0.0987 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 5s 76ms/step - loss: 1.7977 - accuracy: 0.4373 - val_loss: 3.6735 - val_accuracy: 0.1540 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 5s 77ms/step - loss: 1.2818 - accuracy: 0.5784 - val_loss: 4.6390 - val_accuracy: 0.1230 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 5s 77ms/step - loss: 1.0103 - accuracy: 0.6716 - val_loss: 2.3952 - val_accuracy: 0.3440 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 5s 74ms/step - loss: 0.7940 - accuracy: 0.7384 - val_loss: 3.7721 - val_accuracy: 0.2637 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 5s 74ms/step - loss: 0.6396 - accuracy: 0.7940 - val_loss: 1.5061 - val_accuracy: 0.5107 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 5s 77ms/step - loss: 0.4792 - accuracy: 0.8571 - val_loss: 1.3530 - val_accuracy: 0.5743 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 5s 77ms/step - loss: 0.3731 - accuracy: 0.8841 - val_loss: 1.3919 - val_accuracy: 0.5947 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 5s 74ms/step - loss: 0.2876 - accuracy: 0.9157 - val_loss: 1.9526 - val_accuracy: 0.5160 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 5s 77ms/step - loss: 0.2515 - accuracy: 0.9233 - val_loss: 1.7421 - val_accuracy: 0.5953 - lr: 0.0010
Epoch 11/100
71/71 [==============================] - 6s 79ms/step - loss: 0.1896 - accuracy: 0.9456 - val_loss: 1.1901 - val_accuracy: 0.6790 - lr: 0.0010
Epoch 12/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1421 - accuracy: 0.9595 - val_loss: 3.5179 - val_accuracy: 0.4060 - lr: 0.0010
Epoch 13/100
71/71 [==============================] - 6s 78ms/step - loss: 0.1243 - accuracy: 0.9631 - val_loss: 0.9306 - val_accuracy: 0.7537 - lr: 0.0010
Epoch 14/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1573 - accuracy: 0.9553 - val_loss: 3.1776 - val_accuracy: 0.4430 - lr: 0.0010
Epoch 15/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1530 - accuracy: 0.9539 - val_loss: 3.5593 - val_accuracy: 0.4663 - lr: 0.0010
Epoch 16/100
71/71 [==============================] - 5s 76ms/step - loss: 0.0943 - accuracy: 0.9720 - val_loss: 2.5667 - val_accuracy: 0.5480 - lr: 0.0010
Epoch 17/100
71/71 [==============================] - 6s 79ms/step - loss: 0.0648 - accuracy: 0.9822 - val_loss: 0.7054 - val_accuracy: 0.8047 - lr: 0.0010
Epoch 18/100
71/71 [==============================] - 6s 78ms/step - loss: 0.0548 - accuracy: 0.9859 - val_loss: 0.7096 - val_accuracy: 0.7953 - lr: 0.0010
Epoch 19/100
71/71 [==============================] - 5s 76ms/step - loss: 0.0477 - accuracy: 0.9878 - val_loss: 2.1180 - val_accuracy: 0.5820 - lr: 0.0010
Epoch 20/100
71/71 [==============================] - 5s 72ms/step - loss: 0.0378 - accuracy: 0.9899 - val_loss: 2.0568 - val_accuracy: 0.6080 - lr: 0.0010
Epoch 21/100
71/71 [==============================] - 5s 75ms/step - loss: 0.0498 - accuracy: 0.9848 - val_loss: 1.6219 - val_accuracy: 0.6700 - lr: 0.0010
Epoch 22/100
71/71 [==============================] - 5s 75ms/step - loss: 0.0375 - accuracy: 0.9897 - val_loss: 1.4325 - val_accuracy: 0.6947 - lr: 0.0010
Epoch 23/100
71/71 [==============================] - 5s 74ms/step - loss: 0.0320 - accuracy: 0.9915 - val_loss: 0.6269 - val_accuracy: 0.8293 - lr: 1.0000e-04
Epoch 24/100
71/71 [==============================] - 5s 75ms/step - loss: 0.0252 - accuracy: 0.9944 - val_loss: 0.6727 - val_accuracy: 0.8273 - lr: 1.0000e-04
Epoch 25/100
71/71 [==============================] - 5s 76ms/step - loss: 0.0246 - accuracy: 0.9945 - val_loss: 0.8069 - val_accuracy: 0.8067 - lr: 1.0000e-04
Epoch 26/100
71/71 [==============================] - 5s 76ms/step - loss: 0.0237 - accuracy: 0.9940 - val_loss: 0.5632 - val_accuracy: 0.8397 - lr: 1.0000e-04
Epoch 27/100
71/71 [==============================] - 5s 75ms/step - loss: 0.0219 - accuracy: 0.9952 - val_loss: 0.6007 - val_accuracy: 0.8380 - lr: 1.0000e-04
Epoch 28/100
71/71 [==============================] - 5s 74ms/step - loss: 0.0179 - accuracy: 0.9967 - val_loss: 0.6234 - val_accuracy: 0.8400 - lr: 1.0000e-04
Epoch 29/100
71/71 [==============================] - 5s 77ms/step - loss: 0.0184 - accuracy: 0.9965 - val_loss: 0.6709 - val_accuracy: 0.8303 - lr: 1.0000e-04
Epoch 30/100
71/71 [==============================] - 5s 75ms/step - loss: 0.0162 - accuracy: 0.9965 - val_loss: 0.7948 - val_accuracy: 0.8030 - lr: 1.0000e-04
Epoch 31/100
71/71 [==============================] - 5s 75ms/step - loss: 0.0182 - accuracy: 0.9949 - val_loss: 0.5968 - val_accuracy: 0.8400 - lr: 1.0000e-04
Epoch 32/100
71/71 [==============================] - 5s 74ms/step - loss: 0.0142 - accuracy: 0.9976 - val_loss: 0.6054 - val_accuracy: 0.8390 - lr: 1.0000e-05
Epoch 33/100
71/71 [==============================] - 5s 75ms/step - loss: 0.0155 - accuracy: 0.9970 - val_loss: 0.6214 - val_accuracy: 0.8367 - lr: 1.0000e-05
Epoch 34/100
71/71 [==============================] - 5s 73ms/step - loss: 0.0160 - accuracy: 0.9971 - val_loss: 0.6688 - val_accuracy: 0.8297 - lr: 1.0000e-05
Epoch 35/100
71/71 [==============================] - 5s 73ms/step - loss: 0.0144 - accuracy: 0.9976 - val_loss: 0.6371 - val_accuracy: 0.8337 - lr: 1.0000e-05
Epoch 36/100
71/71 [==============================] - 5s 76ms/step - loss: 0.0149 - accuracy: 0.9978 - val_loss: 0.6413 - val_accuracy: 0.8340 - lr: 1.0000e-05
Epoch 37/100
71/71 [==============================] - 5s 74ms/step - loss: 0.0139 - accuracy: 0.9980 - val_loss: 0.6445 - val_accuracy: 0.8340 - lr: 1.0000e-06
Epoch 38/100
71/71 [==============================] - 5s 74ms/step - loss: 0.0154 - accuracy: 0.9969 - val_loss: 0.6449 - val_accuracy: 0.8347 - lr: 1.0000e-06
94/94 [==============================] - 1s 5ms/step
Cohen’s Kappa Score: 0.8392857142857143
Running CNN2
Epoch 1/100
71/71 [==============================] - 21s 270ms/step - loss: 1.7824 - accuracy: 0.4523 - val_loss: 5.1106 - val_accuracy: 0.0773 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 18s 254ms/step - loss: 1.1809 - accuracy: 0.6377 - val_loss: 2.5932 - val_accuracy: 0.3307 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 18s 254ms/step - loss: 0.8713 - accuracy: 0.7387 - val_loss: 3.0263 - val_accuracy: 0.2913 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 18s 254ms/step - loss: 0.6888 - accuracy: 0.7880 - val_loss: 6.6438 - val_accuracy: 0.2480 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 18s 255ms/step - loss: 0.5522 - accuracy: 0.8343 - val_loss: 6.6184 - val_accuracy: 0.2293 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 18s 255ms/step - loss: 0.4529 - accuracy: 0.8652 - val_loss: 2.9774 - val_accuracy: 0.5367 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 18s 255ms/step - loss: 0.3810 - accuracy: 0.8871 - val_loss: 2.0030 - val_accuracy: 0.5687 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 18s 255ms/step - loss: 0.3258 - accuracy: 0.9024 - val_loss: 4.0875 - val_accuracy: 0.3987 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 18s 255ms/step - loss: 0.2842 - accuracy: 0.9153 - val_loss: 4.3294 - val_accuracy: 0.4397 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 18s 255ms/step - loss: 0.2552 - accuracy: 0.9231 - val_loss: 1.6425 - val_accuracy: 0.5740 - lr: 0.0010
Epoch 11/100
71/71 [==============================] - 18s 255ms/step - loss: 0.2200 - accuracy: 0.9363 - val_loss: 2.6448 - val_accuracy: 0.5767 - lr: 0.0010
Epoch 12/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1936 - accuracy: 0.9445 - val_loss: 5.4075 - val_accuracy: 0.3933 - lr: 0.0010
Epoch 13/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1754 - accuracy: 0.9516 - val_loss: 0.8237 - val_accuracy: 0.7467 - lr: 0.0010
Epoch 14/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1548 - accuracy: 0.9561 - val_loss: 1.8084 - val_accuracy: 0.5583 - lr: 0.0010
Epoch 15/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1401 - accuracy: 0.9605 - val_loss: 0.5111 - val_accuracy: 0.8337 - lr: 0.0010
Epoch 16/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1276 - accuracy: 0.9647 - val_loss: 0.8670 - val_accuracy: 0.7673 - lr: 0.0010
Epoch 17/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1178 - accuracy: 0.9691 - val_loss: 0.3603 - val_accuracy: 0.8790 - lr: 0.0010
Epoch 18/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1050 - accuracy: 0.9713 - val_loss: 0.7675 - val_accuracy: 0.7690 - lr: 0.0010
Epoch 19/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1014 - accuracy: 0.9726 - val_loss: 1.3476 - val_accuracy: 0.7067 - lr: 0.0010
Epoch 20/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0888 - accuracy: 0.9773 - val_loss: 0.9765 - val_accuracy: 0.7357 - lr: 0.0010
Epoch 21/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0844 - accuracy: 0.9790 - val_loss: 1.3821 - val_accuracy: 0.6623 - lr: 0.0010
Epoch 22/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0823 - accuracy: 0.9795 - val_loss: 3.4315 - val_accuracy: 0.4403 - lr: 0.0010
Epoch 23/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0519 - accuracy: 0.9877 - val_loss: 0.2091 - val_accuracy: 0.9337 - lr: 1.0000e-04
Epoch 24/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0360 - accuracy: 0.9945 - val_loss: 0.1241 - val_accuracy: 0.9643 - lr: 1.0000e-04
Epoch 25/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0325 - accuracy: 0.9950 - val_loss: 0.1048 - val_accuracy: 0.9710 - lr: 1.0000e-04
Epoch 26/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0303 - accuracy: 0.9955 - val_loss: 0.0962 - val_accuracy: 0.9733 - lr: 1.0000e-04
Epoch 27/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0287 - accuracy: 0.9960 - val_loss: 0.0924 - val_accuracy: 0.9750 - lr: 1.0000e-04
Epoch 28/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0273 - accuracy: 0.9965 - val_loss: 0.0896 - val_accuracy: 0.9763 - lr: 1.0000e-04
Epoch 29/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0262 - accuracy: 0.9968 - val_loss: 0.0883 - val_accuracy: 0.9763 - lr: 1.0000e-04
Epoch 30/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0252 - accuracy: 0.9975 - val_loss: 0.0874 - val_accuracy: 0.9767 - lr: 1.0000e-04
Epoch 31/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0242 - accuracy: 0.9977 - val_loss: 0.0847 - val_accuracy: 0.9773 - lr: 1.0000e-04
Epoch 32/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0234 - accuracy: 0.9980 - val_loss: 0.0831 - val_accuracy: 0.9770 - lr: 1.0000e-04
Epoch 33/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0226 - accuracy: 0.9980 - val_loss: 0.0819 - val_accuracy: 0.9773 - lr: 1.0000e-04
Epoch 34/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0218 - accuracy: 0.9981 - val_loss: 0.0806 - val_accuracy: 0.9773 - lr: 1.0000e-04
Epoch 35/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0211 - accuracy: 0.9983 - val_loss: 0.0803 - val_accuracy: 0.9773 - lr: 1.0000e-04
Epoch 36/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0204 - accuracy: 0.9986 - val_loss: 0.0814 - val_accuracy: 0.9777 - lr: 1.0000e-04
Epoch 37/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0198 - accuracy: 0.9987 - val_loss: 0.0809 - val_accuracy: 0.9770 - lr: 1.0000e-04
Epoch 38/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0192 - accuracy: 0.9989 - val_loss: 0.0810 - val_accuracy: 0.9767 - lr: 1.0000e-04
Epoch 39/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0186 - accuracy: 0.9991 - val_loss: 0.0804 - val_accuracy: 0.9763 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0180 - accuracy: 0.9991 - val_loss: 0.0805 - val_accuracy: 0.9770 - lr: 1.0000e-04
Epoch 41/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0163 - accuracy: 0.9997 - val_loss: 0.0645 - val_accuracy: 0.9817 - lr: 1.0000e-05
Epoch 42/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0158 - accuracy: 0.9997 - val_loss: 0.0630 - val_accuracy: 0.9823 - lr: 1.0000e-05
Epoch 43/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0156 - accuracy: 0.9997 - val_loss: 0.0623 - val_accuracy: 0.9827 - lr: 1.0000e-05
Epoch 44/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0155 - accuracy: 0.9997 - val_loss: 0.0621 - val_accuracy: 0.9827 - lr: 1.0000e-05
Epoch 45/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0154 - accuracy: 0.9997 - val_loss: 0.0620 - val_accuracy: 0.9823 - lr: 1.0000e-05
Epoch 46/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0153 - accuracy: 0.9997 - val_loss: 0.0621 - val_accuracy: 0.9820 - lr: 1.0000e-05
Epoch 47/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0153 - accuracy: 0.9997 - val_loss: 0.0621 - val_accuracy: 0.9820 - lr: 1.0000e-05
Epoch 48/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0152 - accuracy: 0.9997 - val_loss: 0.0621 - val_accuracy: 0.9820 - lr: 1.0000e-05
Epoch 49/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0151 - accuracy: 0.9997 - val_loss: 0.0621 - val_accuracy: 0.9820 - lr: 1.0000e-05
Epoch 50/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0148 - accuracy: 0.9997 - val_loss: 0.0616 - val_accuracy: 0.9827 - lr: 1.0000e-06
Epoch 51/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0148 - accuracy: 0.9997 - val_loss: 0.0613 - val_accuracy: 0.9827 - lr: 1.0000e-06
Epoch 52/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0148 - accuracy: 0.9997 - val_loss: 0.0612 - val_accuracy: 0.9827 - lr: 1.0000e-06
Epoch 53/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0148 - accuracy: 0.9997 - val_loss: 0.0612 - val_accuracy: 0.9827 - lr: 1.0000e-06
94/94 [==============================] - 2s 18ms/step
Cohen’s Kappa Score: 0.985
Running VGG_Baseline
Epoch 1/100
71/71 [==============================] - 8s 88ms/step - loss: 3.6950 - accuracy: 0.2409 - val_loss: 68.6340 - val_accuracy: 0.0667 - lr: 0.0100
Epoch 2/100
71/71 [==============================] - 5s 75ms/step - loss: 2.4062 - accuracy: 0.3767 - val_loss: 12.7772 - val_accuracy: 0.0703 - lr: 0.0100
Epoch 3/100
71/71 [==============================] - 5s 75ms/step - loss: 2.2203 - accuracy: 0.4179 - val_loss: 5.5883 - val_accuracy: 0.0730 - lr: 0.0100
Epoch 4/100
71/71 [==============================] - 5s 76ms/step - loss: 2.0130 - accuracy: 0.4777 - val_loss: 6.7666 - val_accuracy: 0.1070 - lr: 0.0100
Epoch 5/100
71/71 [==============================] - 5s 77ms/step - loss: 1.9166 - accuracy: 0.5165 - val_loss: 3.4060 - val_accuracy: 0.1387 - lr: 0.0100
Epoch 6/100
71/71 [==============================] - 5s 75ms/step - loss: 1.8414 - accuracy: 0.5438 - val_loss: 3.8706 - val_accuracy: 0.1540 - lr: 0.0100
Epoch 7/100
71/71 [==============================] - 5s 76ms/step - loss: 1.7080 - accuracy: 0.5752 - val_loss: 7.4286 - val_accuracy: 0.0847 - lr: 0.0100
Epoch 8/100
71/71 [==============================] - 6s 78ms/step - loss: 1.6263 - accuracy: 0.5913 - val_loss: 3.1665 - val_accuracy: 0.1607 - lr: 0.0100
Epoch 9/100
71/71 [==============================] - 5s 73ms/step - loss: 1.5701 - accuracy: 0.6083 - val_loss: 3.0710 - val_accuracy: 0.1940 - lr: 0.0100
Epoch 10/100
71/71 [==============================] - 5s 75ms/step - loss: 1.5174 - accuracy: 0.6214 - val_loss: 3.5302 - val_accuracy: 0.2167 - lr: 0.0100
Epoch 11/100
71/71 [==============================] - 5s 77ms/step - loss: 1.4595 - accuracy: 0.6358 - val_loss: 3.2446 - val_accuracy: 0.2583 - lr: 0.0100
Epoch 12/100
71/71 [==============================] - 5s 76ms/step - loss: 1.5632 - accuracy: 0.6289 - val_loss: 5.4730 - val_accuracy: 0.1883 - lr: 0.0100
Epoch 13/100
71/71 [==============================] - 5s 76ms/step - loss: 1.4138 - accuracy: 0.6629 - val_loss: 3.0921 - val_accuracy: 0.2607 - lr: 0.0100
Epoch 14/100
71/71 [==============================] - 5s 73ms/step - loss: 1.3052 - accuracy: 0.6808 - val_loss: 3.6984 - val_accuracy: 0.2297 - lr: 0.0100
Epoch 15/100
71/71 [==============================] - 5s 76ms/step - loss: 1.0104 - accuracy: 0.7754 - val_loss: 2.1325 - val_accuracy: 0.3650 - lr: 1.0000e-03
Epoch 16/100
71/71 [==============================] - 5s 75ms/step - loss: 0.8821 - accuracy: 0.8048 - val_loss: 1.8328 - val_accuracy: 0.4193 - lr: 1.0000e-03
Epoch 17/100
71/71 [==============================] - 5s 75ms/step - loss: 0.8131 - accuracy: 0.8154 - val_loss: 1.9619 - val_accuracy: 0.3440 - lr: 1.0000e-03
Epoch 18/100
71/71 [==============================] - 5s 75ms/step - loss: 0.7454 - accuracy: 0.8334 - val_loss: 2.0232 - val_accuracy: 0.3390 - lr: 1.0000e-03
Epoch 19/100
71/71 [==============================] - 5s 76ms/step - loss: 0.6969 - accuracy: 0.8424 - val_loss: 1.7531 - val_accuracy: 0.4460 - lr: 1.0000e-03
Epoch 20/100
71/71 [==============================] - 5s 74ms/step - loss: 0.6519 - accuracy: 0.8588 - val_loss: 1.8676 - val_accuracy: 0.4400 - lr: 1.0000e-03
Epoch 21/100
71/71 [==============================] - 6s 77ms/step - loss: 0.5964 - accuracy: 0.8735 - val_loss: 1.4284 - val_accuracy: 0.5753 - lr: 1.0000e-03
Epoch 22/100
71/71 [==============================] - 5s 74ms/step - loss: 0.5669 - accuracy: 0.8831 - val_loss: 1.5772 - val_accuracy: 0.5330 - lr: 1.0000e-03
Epoch 23/100
71/71 [==============================] - 5s 76ms/step - loss: 0.5367 - accuracy: 0.8938 - val_loss: 1.5474 - val_accuracy: 0.5623 - lr: 1.0000e-03
Epoch 24/100
71/71 [==============================] - 5s 75ms/step - loss: 0.4984 - accuracy: 0.8999 - val_loss: 1.2087 - val_accuracy: 0.6750 - lr: 1.0000e-03
Epoch 25/100
71/71 [==============================] - 5s 75ms/step - loss: 0.4570 - accuracy: 0.9149 - val_loss: 1.0185 - val_accuracy: 0.7140 - lr: 1.0000e-03
Epoch 26/100
71/71 [==============================] - 5s 74ms/step - loss: 0.4353 - accuracy: 0.9207 - val_loss: 1.6481 - val_accuracy: 0.6143 - lr: 1.0000e-03
Epoch 27/100
71/71 [==============================] - 5s 75ms/step - loss: 0.4167 - accuracy: 0.9265 - val_loss: 1.1632 - val_accuracy: 0.6717 - lr: 1.0000e-03
Epoch 28/100
71/71 [==============================] - 5s 77ms/step - loss: 0.3879 - accuracy: 0.9349 - val_loss: 1.0086 - val_accuracy: 0.7383 - lr: 1.0000e-03
Epoch 29/100
71/71 [==============================] - 6s 78ms/step - loss: 0.3836 - accuracy: 0.9373 - val_loss: 0.7292 - val_accuracy: 0.8153 - lr: 1.0000e-03
Epoch 30/100
71/71 [==============================] - 5s 75ms/step - loss: 0.3672 - accuracy: 0.9374 - val_loss: 0.9760 - val_accuracy: 0.7563 - lr: 1.0000e-03
Epoch 31/100
71/71 [==============================] - 5s 74ms/step - loss: 0.3568 - accuracy: 0.9443 - val_loss: 0.9187 - val_accuracy: 0.7627 - lr: 1.0000e-03
Epoch 32/100
71/71 [==============================] - 5s 74ms/step - loss: 0.3317 - accuracy: 0.9513 - val_loss: 0.6257 - val_accuracy: 0.8483 - lr: 1.0000e-03
Epoch 33/100
71/71 [==============================] - 5s 73ms/step - loss: 0.3212 - accuracy: 0.9514 - val_loss: 0.5597 - val_accuracy: 0.8780 - lr: 1.0000e-03
Epoch 34/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2989 - accuracy: 0.9591 - val_loss: 1.0661 - val_accuracy: 0.7450 - lr: 1.0000e-03
Epoch 35/100
71/71 [==============================] - 5s 74ms/step - loss: 0.3095 - accuracy: 0.9541 - val_loss: 0.6994 - val_accuracy: 0.8260 - lr: 1.0000e-03
Epoch 36/100
71/71 [==============================] - 5s 74ms/step - loss: 0.2983 - accuracy: 0.9580 - val_loss: 0.8261 - val_accuracy: 0.8040 - lr: 1.0000e-03
Epoch 37/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2875 - accuracy: 0.9621 - val_loss: 0.8876 - val_accuracy: 0.7987 - lr: 1.0000e-03
Epoch 38/100
71/71 [==============================] - 5s 74ms/step - loss: 0.2918 - accuracy: 0.9584 - val_loss: 0.7718 - val_accuracy: 0.8107 - lr: 1.0000e-03
Epoch 39/100
71/71 [==============================] - 6s 77ms/step - loss: 0.2423 - accuracy: 0.9761 - val_loss: 0.3302 - val_accuracy: 0.9497 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 5s 72ms/step - loss: 0.2138 - accuracy: 0.9866 - val_loss: 0.3020 - val_accuracy: 0.9590 - lr: 1.0000e-04
Epoch 41/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2060 - accuracy: 0.9896 - val_loss: 0.2851 - val_accuracy: 0.9617 - lr: 1.0000e-04
Epoch 42/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2014 - accuracy: 0.9891 - val_loss: 0.2725 - val_accuracy: 0.9650 - lr: 1.0000e-04
Epoch 43/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1979 - accuracy: 0.9898 - val_loss: 0.2774 - val_accuracy: 0.9650 - lr: 1.0000e-04
Epoch 44/100
71/71 [==============================] - 5s 77ms/step - loss: 0.1927 - accuracy: 0.9924 - val_loss: 0.2649 - val_accuracy: 0.9673 - lr: 1.0000e-04
Epoch 45/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1905 - accuracy: 0.9920 - val_loss: 0.2633 - val_accuracy: 0.9677 - lr: 1.0000e-04
Epoch 46/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1876 - accuracy: 0.9935 - val_loss: 0.2694 - val_accuracy: 0.9660 - lr: 1.0000e-04
Epoch 47/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1838 - accuracy: 0.9945 - val_loss: 0.2527 - val_accuracy: 0.9693 - lr: 1.0000e-04
Epoch 48/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1817 - accuracy: 0.9953 - val_loss: 0.2561 - val_accuracy: 0.9690 - lr: 1.0000e-04
Epoch 49/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1785 - accuracy: 0.9946 - val_loss: 0.2495 - val_accuracy: 0.9687 - lr: 1.0000e-04
Epoch 50/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1759 - accuracy: 0.9960 - val_loss: 0.2458 - val_accuracy: 0.9690 - lr: 1.0000e-04
Epoch 51/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1720 - accuracy: 0.9967 - val_loss: 0.2437 - val_accuracy: 0.9697 - lr: 1.0000e-04
Epoch 52/100
71/71 [==============================] - 5s 72ms/step - loss: 0.1718 - accuracy: 0.9959 - val_loss: 0.2506 - val_accuracy: 0.9693 - lr: 1.0000e-04
Epoch 53/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1697 - accuracy: 0.9968 - val_loss: 0.2402 - val_accuracy: 0.9740 - lr: 1.0000e-04
Epoch 54/100
71/71 [==============================] - 5s 73ms/step - loss: 0.1681 - accuracy: 0.9961 - val_loss: 0.2409 - val_accuracy: 0.9720 - lr: 1.0000e-04
Epoch 55/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1647 - accuracy: 0.9970 - val_loss: 0.2398 - val_accuracy: 0.9733 - lr: 1.0000e-04
Epoch 56/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1632 - accuracy: 0.9970 - val_loss: 0.2594 - val_accuracy: 0.9640 - lr: 1.0000e-04
Epoch 57/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1620 - accuracy: 0.9972 - val_loss: 0.2331 - val_accuracy: 0.9740 - lr: 1.0000e-04
Epoch 58/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1597 - accuracy: 0.9970 - val_loss: 0.2388 - val_accuracy: 0.9697 - lr: 1.0000e-04
Epoch 59/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1568 - accuracy: 0.9986 - val_loss: 0.2407 - val_accuracy: 0.9707 - lr: 1.0000e-04
Epoch 60/100
71/71 [==============================] - 5s 73ms/step - loss: 0.1549 - accuracy: 0.9977 - val_loss: 0.2338 - val_accuracy: 0.9727 - lr: 1.0000e-04
Epoch 61/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1531 - accuracy: 0.9981 - val_loss: 0.2322 - val_accuracy: 0.9723 - lr: 1.0000e-04
Epoch 62/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1516 - accuracy: 0.9981 - val_loss: 0.2278 - val_accuracy: 0.9737 - lr: 1.0000e-04
Epoch 63/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1503 - accuracy: 0.9977 - val_loss: 0.2352 - val_accuracy: 0.9713 - lr: 1.0000e-04
94/94 [==============================] - 1s 6ms/step
Cohen’s Kappa Score: 0.9725

Analysing the Results¶

In [35]:
overall.iloc[-3:]
Out[35]:
Model Name Epochs Batch Size Train Loss Test Loss Train Acc Test Acc Kappa Comments
0 CNN basic one 38 128 0.017855 0.623409 0.996677 0.840000 0.839286 NaN
1 CNN2 basic one 53 128 0.015596 0.062343 0.999668 0.982667 0.985000 NaN
2 VGG_Baseline basic one 63 128 0.169651 0.240200 0.996788 0.974000 0.972500 NaN

Analyse the graph¶

In [36]:
for fig in figures:
    fig()
Things Observed
    For CNN, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes For CNN 2, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes For VGG 16, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes

Data Augmentation 2¶

  • Adjust the color of the iamge
  • Change the brightness
  • Change the saturation
  • Change the hue
In [37]:
def data_augmentation2(image):
    image = tf.image.random_brightness(image, max_delta=0.2) 
    image = tf.image.random_contrast(image, lower=0.5, upper=1.5)
    image = tf.image.random_saturation(image, lower=0.5, upper=1.5) 
    image = tf.image.random_hue(image, max_delta=0.2)
    return image

Process the train and validation data¶

In [38]:
train_ds_basic_2 = tf.data.Dataset.from_tensor_slices((X_train_big, y_train_big))
train_ds_basic_2 = train_ds_basic_2.map(preprocess2).batch(128).prefetch(tf.data.AUTOTUNE)

val_ds_basic_2 = tf.data.Dataset.from_tensor_slices((X_val_big, y_val_big))
val_ds_basic_2 = val_ds_basic_2.map(preprocess2).batch(128).prefetch(tf.data.AUTOTUNE)

Visualisation of the augmented dataset¶

In [39]:
visualisation(train_ds_basic_2,5,'(basic 2)','train')

Running the models¶

In [40]:
figures = []
models = models_array()
for i in range(len(models)):
    print(f'Running {model_names[i]}')
    results, fig = evaluator.model_evaluate( train_ds_basic_2, val_ds_basic_2 , models[i], base_hparams)
    results['Model Name'] = f'{model_names[i]} basic two'
    y_pred = models[i].predict(X_test_big)
    y_pred_classes = np.argmax(y_pred, axis=1)
    kappa = cohen_kappa_score(y_test_big, y_pred_classes)
    print("Cohen’s Kappa Score:", kappa)
    results['Kappa'] = kappa
    overall = pd.concat([overall, pd.DataFrame([results])], ignore_index=True)
    figures.append(fig)
    
Model: "sequential_16"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_11 (Conv2D)          (None, 128, 128, 32)      320       
                                                                 
 batch_normalization_11 (Bat  (None, 128, 128, 32)     128       
 chNormalization)                                                
                                                                 
 re_lu_11 (ReLU)             (None, 128, 128, 32)      0         
                                                                 
 conv2d_12 (Conv2D)          (None, 128, 128, 64)      18496     
                                                                 
 batch_normalization_12 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 re_lu_12 (ReLU)             (None, 128, 128, 64)      0         
                                                                 
 conv2d_13 (Conv2D)          (None, 128, 128, 128)     73856     
                                                                 
 batch_normalization_13 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_13 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_14 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_14 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_14 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_15 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_15 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_15 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_16 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_16 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_16 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 global_average_pooling2d_1   (None, 128)              0         
 (GlobalAveragePooling2D)                                        
                                                                 
 dense_1 (Dense)             (None, 256)               33024     
                                                                 
 dense_2 (Dense)             (None, 15)                3855      
                                                                 
=================================================================
Total params: 574,735
Trainable params: 573,519
Non-trainable params: 1,216
_________________________________________________________________
Model: "CNN_Baseline"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_17 (Conv2D)          (None, 128, 128, 64)      640       
                                                                 
 batch_normalization_17 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 max_pooling2d_4 (MaxPooling  (None, 64, 64, 64)       0         
 2D)                                                             
                                                                 
 dropout_1 (Dropout)         (None, 64, 64, 64)        0         
                                                                 
 conv2d_18 (Conv2D)          (None, 64, 64, 128)       73856     
                                                                 
 batch_normalization_18 (Bat  (None, 64, 64, 128)      512       
 chNormalization)                                                
                                                                 
 max_pooling2d_5 (MaxPooling  (None, 32, 32, 128)      0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 32, 32, 128)       0         
                                                                 
 conv2d_19 (Conv2D)          (None, 32, 32, 256)       295168    
                                                                 
 batch_normalization_19 (Bat  (None, 32, 32, 256)      1024      
 chNormalization)                                                
                                                                 
 max_pooling2d_6 (MaxPooling  (None, 16, 16, 256)      0         
 2D)                                                             
                                                                 
 dropout_3 (Dropout)         (None, 16, 16, 256)       0         
                                                                 
 flatten (Flatten)           (None, 65536)             0         
                                                                 
 dense_3 (Dense)             (None, 512)               33554944  
                                                                 
 batch_normalization_20 (Bat  (None, 512)              2048      
 chNormalization)                                                
                                                                 
 dropout_4 (Dropout)         (None, 512)               0         
                                                                 
 dense_4 (Dense)             (None, 128)               65664     
                                                                 
 batch_normalization_21 (Bat  (None, 128)              512       
 chNormalization)                                                
                                                                 
 dropout_5 (Dropout)         (None, 128)               0         
                                                                 
 dense_5 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 33,996,559
Trainable params: 33,994,383
Non-trainable params: 2,176
_________________________________________________________________
Running CNN
Epoch 1/100
71/71 [==============================] - 7s 79ms/step - loss: 2.7543 - accuracy: 0.2368 - val_loss: 5.0730 - val_accuracy: 0.1283 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 5s 74ms/step - loss: 1.8582 - accuracy: 0.4283 - val_loss: 2.3013 - val_accuracy: 0.2577 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 6s 77ms/step - loss: 1.4301 - accuracy: 0.5392 - val_loss: 2.6641 - val_accuracy: 0.2480 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 5s 72ms/step - loss: 1.1399 - accuracy: 0.6303 - val_loss: 2.8095 - val_accuracy: 0.2590 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 5s 75ms/step - loss: 0.8962 - accuracy: 0.7122 - val_loss: 4.5725 - val_accuracy: 0.2420 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 5s 74ms/step - loss: 0.7132 - accuracy: 0.7697 - val_loss: 3.2076 - val_accuracy: 0.2963 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 6s 77ms/step - loss: 0.5349 - accuracy: 0.8286 - val_loss: 4.6403 - val_accuracy: 0.2663 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 5s 76ms/step - loss: 0.3972 - accuracy: 0.8824 - val_loss: 0.8181 - val_accuracy: 0.7290 - lr: 1.0000e-04
Epoch 9/100
71/71 [==============================] - 5s 76ms/step - loss: 0.3435 - accuracy: 0.8981 - val_loss: 0.7740 - val_accuracy: 0.7383 - lr: 1.0000e-04
Epoch 10/100
71/71 [==============================] - 5s 75ms/step - loss: 0.3220 - accuracy: 0.9049 - val_loss: 0.7578 - val_accuracy: 0.7420 - lr: 1.0000e-04
Epoch 11/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2926 - accuracy: 0.9191 - val_loss: 0.7743 - val_accuracy: 0.7450 - lr: 1.0000e-04
Epoch 12/100
71/71 [==============================] - 6s 78ms/step - loss: 0.2719 - accuracy: 0.9248 - val_loss: 0.7849 - val_accuracy: 0.7500 - lr: 1.0000e-04
Epoch 13/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2413 - accuracy: 0.9381 - val_loss: 1.0420 - val_accuracy: 0.6857 - lr: 1.0000e-04
Epoch 14/100
71/71 [==============================] - 5s 73ms/step - loss: 0.2283 - accuracy: 0.9413 - val_loss: 0.7862 - val_accuracy: 0.7583 - lr: 1.0000e-04
Epoch 15/100
71/71 [==============================] - 5s 74ms/step - loss: 0.2143 - accuracy: 0.9466 - val_loss: 0.8091 - val_accuracy: 0.7430 - lr: 1.0000e-04
Epoch 16/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2012 - accuracy: 0.9483 - val_loss: 0.9459 - val_accuracy: 0.7093 - lr: 1.0000e-05
Epoch 17/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1884 - accuracy: 0.9527 - val_loss: 0.9369 - val_accuracy: 0.7147 - lr: 1.0000e-05
Epoch 18/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1910 - accuracy: 0.9535 - val_loss: 1.0690 - val_accuracy: 0.6863 - lr: 1.0000e-05
Epoch 19/100
71/71 [==============================] - 5s 73ms/step - loss: 0.1889 - accuracy: 0.9548 - val_loss: 0.9572 - val_accuracy: 0.7107 - lr: 1.0000e-05
Epoch 20/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1843 - accuracy: 0.9538 - val_loss: 0.8920 - val_accuracy: 0.7297 - lr: 1.0000e-05
Epoch 21/100
71/71 [==============================] - 5s 70ms/step - loss: 0.1862 - accuracy: 0.9505 - val_loss: 0.9494 - val_accuracy: 0.7143 - lr: 1.0000e-06
Epoch 22/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1880 - accuracy: 0.9540 - val_loss: 0.9479 - val_accuracy: 0.7147 - lr: 1.0000e-06
Epoch 23/100
71/71 [==============================] - 5s 73ms/step - loss: 0.1897 - accuracy: 0.9543 - val_loss: 0.9380 - val_accuracy: 0.7177 - lr: 1.0000e-06
Epoch 24/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1814 - accuracy: 0.9558 - val_loss: 0.9419 - val_accuracy: 0.7177 - lr: 1.0000e-06
94/94 [==============================] - 1s 4ms/step
Cohen’s Kappa Score: 0.7457142857142858
Running CNN2
Epoch 1/100
71/71 [==============================] - 19s 257ms/step - loss: 1.7964 - accuracy: 0.4401 - val_loss: 6.0117 - val_accuracy: 0.0657 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 18s 254ms/step - loss: 1.1835 - accuracy: 0.6338 - val_loss: 5.6493 - val_accuracy: 0.1140 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 18s 254ms/step - loss: 0.8984 - accuracy: 0.7221 - val_loss: 11.3253 - val_accuracy: 0.1260 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 18s 254ms/step - loss: 0.7184 - accuracy: 0.7778 - val_loss: 2.5909 - val_accuracy: 0.3733 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 18s 255ms/step - loss: 0.5847 - accuracy: 0.8226 - val_loss: 4.5517 - val_accuracy: 0.3460 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 18s 255ms/step - loss: 0.5094 - accuracy: 0.8456 - val_loss: 5.2987 - val_accuracy: 0.3873 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 18s 255ms/step - loss: 0.4177 - accuracy: 0.8739 - val_loss: 4.0588 - val_accuracy: 0.4057 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 18s 255ms/step - loss: 0.3448 - accuracy: 0.8959 - val_loss: 4.6854 - val_accuracy: 0.3503 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 18s 255ms/step - loss: 0.3041 - accuracy: 0.9089 - val_loss: 11.9348 - val_accuracy: 0.1960 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 18s 255ms/step - loss: 0.2260 - accuracy: 0.9434 - val_loss: 0.8644 - val_accuracy: 0.7350 - lr: 1.0000e-04
Epoch 11/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1910 - accuracy: 0.9539 - val_loss: 0.3603 - val_accuracy: 0.8903 - lr: 1.0000e-04
Epoch 12/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1792 - accuracy: 0.9571 - val_loss: 0.3042 - val_accuracy: 0.9040 - lr: 1.0000e-04
Epoch 13/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1702 - accuracy: 0.9611 - val_loss: 0.2840 - val_accuracy: 0.9110 - lr: 1.0000e-04
Epoch 14/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1626 - accuracy: 0.9632 - val_loss: 0.2680 - val_accuracy: 0.9177 - lr: 1.0000e-04
Epoch 15/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1559 - accuracy: 0.9653 - val_loss: 0.2570 - val_accuracy: 0.9253 - lr: 1.0000e-04
Epoch 16/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1497 - accuracy: 0.9663 - val_loss: 0.2494 - val_accuracy: 0.9297 - lr: 1.0000e-04
Epoch 17/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1439 - accuracy: 0.9685 - val_loss: 0.2360 - val_accuracy: 0.9363 - lr: 1.0000e-04
Epoch 18/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1385 - accuracy: 0.9701 - val_loss: 0.2277 - val_accuracy: 0.9390 - lr: 1.0000e-04
Epoch 19/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1333 - accuracy: 0.9720 - val_loss: 0.2222 - val_accuracy: 0.9417 - lr: 1.0000e-04
Epoch 20/100
71/71 [==============================] - 18s 256ms/step - loss: 0.1284 - accuracy: 0.9731 - val_loss: 0.2147 - val_accuracy: 0.9440 - lr: 1.0000e-04
Epoch 21/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1237 - accuracy: 0.9746 - val_loss: 0.2126 - val_accuracy: 0.9430 - lr: 1.0000e-04
Epoch 22/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1191 - accuracy: 0.9760 - val_loss: 0.2055 - val_accuracy: 0.9447 - lr: 1.0000e-04
Epoch 23/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1147 - accuracy: 0.9772 - val_loss: 0.2003 - val_accuracy: 0.9463 - lr: 1.0000e-04
Epoch 24/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1105 - accuracy: 0.9781 - val_loss: 0.2009 - val_accuracy: 0.9460 - lr: 1.0000e-04
Epoch 25/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1065 - accuracy: 0.9794 - val_loss: 0.2008 - val_accuracy: 0.9450 - lr: 1.0000e-04
Epoch 26/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1026 - accuracy: 0.9804 - val_loss: 0.2002 - val_accuracy: 0.9443 - lr: 1.0000e-04
Epoch 27/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0989 - accuracy: 0.9808 - val_loss: 0.1943 - val_accuracy: 0.9460 - lr: 1.0000e-04
Epoch 28/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0952 - accuracy: 0.9819 - val_loss: 0.1970 - val_accuracy: 0.9440 - lr: 1.0000e-04
Epoch 29/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0919 - accuracy: 0.9825 - val_loss: 0.1914 - val_accuracy: 0.9467 - lr: 1.0000e-04
Epoch 30/100
71/71 [==============================] - 18s 256ms/step - loss: 0.0884 - accuracy: 0.9834 - val_loss: 0.1802 - val_accuracy: 0.9513 - lr: 1.0000e-04
Epoch 31/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0852 - accuracy: 0.9846 - val_loss: 0.1695 - val_accuracy: 0.9543 - lr: 1.0000e-04
Epoch 32/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0820 - accuracy: 0.9856 - val_loss: 0.1675 - val_accuracy: 0.9543 - lr: 1.0000e-04
Epoch 33/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0790 - accuracy: 0.9865 - val_loss: 0.1653 - val_accuracy: 0.9540 - lr: 1.0000e-04
Epoch 34/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0762 - accuracy: 0.9878 - val_loss: 0.1624 - val_accuracy: 0.9567 - lr: 1.0000e-04
Epoch 35/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0735 - accuracy: 0.9884 - val_loss: 0.1689 - val_accuracy: 0.9533 - lr: 1.0000e-04
Epoch 36/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0708 - accuracy: 0.9890 - val_loss: 0.1613 - val_accuracy: 0.9553 - lr: 1.0000e-04
Epoch 37/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0682 - accuracy: 0.9899 - val_loss: 0.1687 - val_accuracy: 0.9513 - lr: 1.0000e-04
Epoch 38/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0656 - accuracy: 0.9907 - val_loss: 0.1536 - val_accuracy: 0.9570 - lr: 1.0000e-04
Epoch 39/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0633 - accuracy: 0.9916 - val_loss: 0.1686 - val_accuracy: 0.9517 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0608 - accuracy: 0.9924 - val_loss: 0.1594 - val_accuracy: 0.9553 - lr: 1.0000e-04
Epoch 41/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0587 - accuracy: 0.9930 - val_loss: 0.1732 - val_accuracy: 0.9490 - lr: 1.0000e-04
Epoch 42/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0564 - accuracy: 0.9937 - val_loss: 0.1567 - val_accuracy: 0.9560 - lr: 1.0000e-04
Epoch 43/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0544 - accuracy: 0.9940 - val_loss: 0.1717 - val_accuracy: 0.9507 - lr: 1.0000e-04
Epoch 44/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0490 - accuracy: 0.9937 - val_loss: 0.0988 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 45/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0457 - accuracy: 0.9947 - val_loss: 0.0976 - val_accuracy: 0.9757 - lr: 1.0000e-05
Epoch 46/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0449 - accuracy: 0.9953 - val_loss: 0.0964 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 47/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0443 - accuracy: 0.9957 - val_loss: 0.0953 - val_accuracy: 0.9767 - lr: 1.0000e-05
Epoch 48/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0439 - accuracy: 0.9957 - val_loss: 0.0949 - val_accuracy: 0.9770 - lr: 1.0000e-05
Epoch 49/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0436 - accuracy: 0.9956 - val_loss: 0.0944 - val_accuracy: 0.9773 - lr: 1.0000e-05
Epoch 50/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0433 - accuracy: 0.9958 - val_loss: 0.0939 - val_accuracy: 0.9773 - lr: 1.0000e-05
Epoch 51/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0430 - accuracy: 0.9958 - val_loss: 0.0938 - val_accuracy: 0.9773 - lr: 1.0000e-05
Epoch 52/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0427 - accuracy: 0.9959 - val_loss: 0.0935 - val_accuracy: 0.9777 - lr: 1.0000e-05
Epoch 53/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0424 - accuracy: 0.9961 - val_loss: 0.0932 - val_accuracy: 0.9777 - lr: 1.0000e-05
Epoch 54/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0421 - accuracy: 0.9962 - val_loss: 0.0929 - val_accuracy: 0.9777 - lr: 1.0000e-05
Epoch 55/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0419 - accuracy: 0.9965 - val_loss: 0.0926 - val_accuracy: 0.9777 - lr: 1.0000e-05
Epoch 56/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0416 - accuracy: 0.9966 - val_loss: 0.0923 - val_accuracy: 0.9777 - lr: 1.0000e-05
Epoch 57/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0413 - accuracy: 0.9967 - val_loss: 0.0920 - val_accuracy: 0.9773 - lr: 1.0000e-05
Epoch 58/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0411 - accuracy: 0.9968 - val_loss: 0.0918 - val_accuracy: 0.9773 - lr: 1.0000e-05
Epoch 59/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0408 - accuracy: 0.9968 - val_loss: 0.0916 - val_accuracy: 0.9773 - lr: 1.0000e-05
Epoch 60/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0406 - accuracy: 0.9969 - val_loss: 0.0914 - val_accuracy: 0.9773 - lr: 1.0000e-05
Epoch 61/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0403 - accuracy: 0.9969 - val_loss: 0.0911 - val_accuracy: 0.9777 - lr: 1.0000e-05
Epoch 62/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0400 - accuracy: 0.9969 - val_loss: 0.0909 - val_accuracy: 0.9780 - lr: 1.0000e-05
Epoch 63/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0398 - accuracy: 0.9970 - val_loss: 0.0907 - val_accuracy: 0.9780 - lr: 1.0000e-05
Epoch 64/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0395 - accuracy: 0.9970 - val_loss: 0.0905 - val_accuracy: 0.9780 - lr: 1.0000e-05
Epoch 65/100
71/71 [==============================] - 18s 256ms/step - loss: 0.0393 - accuracy: 0.9970 - val_loss: 0.0903 - val_accuracy: 0.9787 - lr: 1.0000e-05
Epoch 66/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0390 - accuracy: 0.9970 - val_loss: 0.0901 - val_accuracy: 0.9787 - lr: 1.0000e-05
Epoch 67/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0388 - accuracy: 0.9970 - val_loss: 0.0899 - val_accuracy: 0.9790 - lr: 1.0000e-05
Epoch 68/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0385 - accuracy: 0.9970 - val_loss: 0.0897 - val_accuracy: 0.9787 - lr: 1.0000e-05
Epoch 69/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0383 - accuracy: 0.9972 - val_loss: 0.0894 - val_accuracy: 0.9783 - lr: 1.0000e-05
Epoch 70/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0381 - accuracy: 0.9972 - val_loss: 0.0891 - val_accuracy: 0.9790 - lr: 1.0000e-05
Epoch 71/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0378 - accuracy: 0.9973 - val_loss: 0.0892 - val_accuracy: 0.9783 - lr: 1.0000e-05
Epoch 72/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0376 - accuracy: 0.9975 - val_loss: 0.0888 - val_accuracy: 0.9790 - lr: 1.0000e-05
Epoch 73/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0373 - accuracy: 0.9975 - val_loss: 0.0886 - val_accuracy: 0.9787 - lr: 1.0000e-05
Epoch 74/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0371 - accuracy: 0.9975 - val_loss: 0.0884 - val_accuracy: 0.9787 - lr: 1.0000e-05
Epoch 75/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0369 - accuracy: 0.9976 - val_loss: 0.0883 - val_accuracy: 0.9790 - lr: 1.0000e-05
Epoch 76/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0366 - accuracy: 0.9977 - val_loss: 0.0880 - val_accuracy: 0.9787 - lr: 1.0000e-05
Epoch 77/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0364 - accuracy: 0.9977 - val_loss: 0.0878 - val_accuracy: 0.9787 - lr: 1.0000e-05
94/94 [==============================] - 2s 16ms/step
Cohen’s Kappa Score: 0.9792857142857143
Running VGG_Baseline
Epoch 1/100
71/71 [==============================] - 8s 81ms/step - loss: 3.7130 - accuracy: 0.3398 - val_loss: 194.5654 - val_accuracy: 0.0943 - lr: 0.0100
Epoch 2/100
71/71 [==============================] - 5s 74ms/step - loss: 2.2185 - accuracy: 0.4681 - val_loss: 4.6053 - val_accuracy: 0.1047 - lr: 0.0100
Epoch 3/100
71/71 [==============================] - 5s 76ms/step - loss: 1.9828 - accuracy: 0.5214 - val_loss: 10.5360 - val_accuracy: 0.0900 - lr: 0.0100
Epoch 4/100
71/71 [==============================] - 5s 76ms/step - loss: 1.8593 - accuracy: 0.5576 - val_loss: 22.4761 - val_accuracy: 0.0703 - lr: 0.0100
Epoch 5/100
71/71 [==============================] - 5s 75ms/step - loss: 1.7820 - accuracy: 0.5923 - val_loss: 3.6513 - val_accuracy: 0.1763 - lr: 0.0100
Epoch 6/100
71/71 [==============================] - 5s 75ms/step - loss: 1.7037 - accuracy: 0.6108 - val_loss: 4.1829 - val_accuracy: 0.1913 - lr: 0.0100
Epoch 7/100
71/71 [==============================] - 5s 73ms/step - loss: 1.7713 - accuracy: 0.5986 - val_loss: 3.9473 - val_accuracy: 0.2930 - lr: 0.0100
Epoch 8/100
71/71 [==============================] - 5s 76ms/step - loss: 1.6542 - accuracy: 0.6253 - val_loss: 3.2740 - val_accuracy: 0.2683 - lr: 0.0100
Epoch 9/100
71/71 [==============================] - 5s 74ms/step - loss: 1.5704 - accuracy: 0.6324 - val_loss: 3.8286 - val_accuracy: 0.2593 - lr: 0.0100
Epoch 10/100
71/71 [==============================] - 5s 74ms/step - loss: 1.4600 - accuracy: 0.6678 - val_loss: 3.3423 - val_accuracy: 0.1970 - lr: 0.0100
Epoch 11/100
71/71 [==============================] - 5s 74ms/step - loss: 1.4761 - accuracy: 0.6649 - val_loss: 6.4964 - val_accuracy: 0.1470 - lr: 0.0100
Epoch 12/100
71/71 [==============================] - 5s 75ms/step - loss: 1.3569 - accuracy: 0.6890 - val_loss: 2.6734 - val_accuracy: 0.3307 - lr: 0.0100
Epoch 13/100
71/71 [==============================] - 5s 76ms/step - loss: 1.3813 - accuracy: 0.6940 - val_loss: 2.7017 - val_accuracy: 0.3423 - lr: 0.0100
Epoch 14/100
71/71 [==============================] - 5s 75ms/step - loss: 1.3036 - accuracy: 0.7157 - val_loss: 3.1741 - val_accuracy: 0.2693 - lr: 0.0100
Epoch 15/100
71/71 [==============================] - 5s 77ms/step - loss: 1.2600 - accuracy: 0.7199 - val_loss: 7.1163 - val_accuracy: 0.2060 - lr: 0.0100
Epoch 16/100
71/71 [==============================] - 5s 75ms/step - loss: 1.2386 - accuracy: 0.7373 - val_loss: 2.6765 - val_accuracy: 0.4473 - lr: 0.0100
Epoch 17/100
71/71 [==============================] - 5s 76ms/step - loss: 1.1756 - accuracy: 0.7475 - val_loss: 4.8358 - val_accuracy: 0.1360 - lr: 0.0100
Epoch 18/100
71/71 [==============================] - 6s 78ms/step - loss: 0.9255 - accuracy: 0.8226 - val_loss: 1.8162 - val_accuracy: 0.5363 - lr: 1.0000e-03
Epoch 19/100
71/71 [==============================] - 6s 78ms/step - loss: 0.7819 - accuracy: 0.8586 - val_loss: 1.8051 - val_accuracy: 0.5377 - lr: 1.0000e-03
Epoch 20/100
71/71 [==============================] - 5s 77ms/step - loss: 0.7028 - accuracy: 0.8743 - val_loss: 1.3342 - val_accuracy: 0.6633 - lr: 1.0000e-03
Epoch 21/100
71/71 [==============================] - 5s 75ms/step - loss: 0.6385 - accuracy: 0.8886 - val_loss: 1.2191 - val_accuracy: 0.7057 - lr: 1.0000e-03
Epoch 22/100
71/71 [==============================] - 6s 77ms/step - loss: 0.5833 - accuracy: 0.8992 - val_loss: 1.1689 - val_accuracy: 0.7313 - lr: 1.0000e-03
Epoch 23/100
71/71 [==============================] - 5s 74ms/step - loss: 0.5474 - accuracy: 0.9055 - val_loss: 0.9183 - val_accuracy: 0.7980 - lr: 1.0000e-03
Epoch 24/100
71/71 [==============================] - 5s 76ms/step - loss: 0.5104 - accuracy: 0.9147 - val_loss: 0.7994 - val_accuracy: 0.8360 - lr: 1.0000e-03
Epoch 25/100
71/71 [==============================] - 5s 74ms/step - loss: 0.4744 - accuracy: 0.9221 - val_loss: 0.8342 - val_accuracy: 0.8097 - lr: 1.0000e-03
Epoch 26/100
71/71 [==============================] - 5s 76ms/step - loss: 0.4518 - accuracy: 0.9266 - val_loss: 0.6993 - val_accuracy: 0.8503 - lr: 1.0000e-03
Epoch 27/100
71/71 [==============================] - 5s 74ms/step - loss: 0.4277 - accuracy: 0.9334 - val_loss: 0.6207 - val_accuracy: 0.8803 - lr: 1.0000e-03
Epoch 28/100
71/71 [==============================] - 5s 74ms/step - loss: 0.4027 - accuracy: 0.9421 - val_loss: 0.7179 - val_accuracy: 0.8340 - lr: 1.0000e-03
Epoch 29/100
71/71 [==============================] - 5s 76ms/step - loss: 0.4019 - accuracy: 0.9368 - val_loss: 0.7397 - val_accuracy: 0.8200 - lr: 1.0000e-03
Epoch 30/100
71/71 [==============================] - 5s 75ms/step - loss: 0.3659 - accuracy: 0.9478 - val_loss: 0.7312 - val_accuracy: 0.8233 - lr: 1.0000e-03
Epoch 31/100
71/71 [==============================] - 5s 75ms/step - loss: 0.3441 - accuracy: 0.9526 - val_loss: 0.6943 - val_accuracy: 0.8330 - lr: 1.0000e-03
Epoch 32/100
71/71 [==============================] - 5s 73ms/step - loss: 0.3498 - accuracy: 0.9489 - val_loss: 0.7788 - val_accuracy: 0.8100 - lr: 1.0000e-03
Epoch 33/100
71/71 [==============================] - 5s 74ms/step - loss: 0.2989 - accuracy: 0.9673 - val_loss: 0.4253 - val_accuracy: 0.9267 - lr: 1.0000e-04
Epoch 34/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2694 - accuracy: 0.9781 - val_loss: 0.3433 - val_accuracy: 0.9530 - lr: 1.0000e-04
Epoch 35/100
71/71 [==============================] - 5s 77ms/step - loss: 0.2608 - accuracy: 0.9806 - val_loss: 0.3283 - val_accuracy: 0.9587 - lr: 1.0000e-04
Epoch 36/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2576 - accuracy: 0.9816 - val_loss: 0.3252 - val_accuracy: 0.9600 - lr: 1.0000e-04
Epoch 37/100
71/71 [==============================] - 5s 74ms/step - loss: 0.2502 - accuracy: 0.9845 - val_loss: 0.3223 - val_accuracy: 0.9600 - lr: 1.0000e-04
Epoch 38/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2461 - accuracy: 0.9845 - val_loss: 0.3159 - val_accuracy: 0.9630 - lr: 1.0000e-04
Epoch 39/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2439 - accuracy: 0.9834 - val_loss: 0.3078 - val_accuracy: 0.9630 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2371 - accuracy: 0.9865 - val_loss: 0.3088 - val_accuracy: 0.9633 - lr: 1.0000e-04
Epoch 41/100
71/71 [==============================] - 5s 77ms/step - loss: 0.2338 - accuracy: 0.9863 - val_loss: 0.3034 - val_accuracy: 0.9653 - lr: 1.0000e-04
Epoch 42/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2303 - accuracy: 0.9884 - val_loss: 0.3031 - val_accuracy: 0.9633 - lr: 1.0000e-04
Epoch 43/100
71/71 [==============================] - 6s 78ms/step - loss: 0.2288 - accuracy: 0.9878 - val_loss: 0.2998 - val_accuracy: 0.9640 - lr: 1.0000e-04
Epoch 44/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2261 - accuracy: 0.9884 - val_loss: 0.2980 - val_accuracy: 0.9653 - lr: 1.0000e-04
Epoch 45/100
71/71 [==============================] - 5s 74ms/step - loss: 0.2204 - accuracy: 0.9909 - val_loss: 0.2892 - val_accuracy: 0.9667 - lr: 1.0000e-04
Epoch 46/100
71/71 [==============================] - 5s 73ms/step - loss: 0.2182 - accuracy: 0.9904 - val_loss: 0.2898 - val_accuracy: 0.9653 - lr: 1.0000e-04
Epoch 47/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2195 - accuracy: 0.9879 - val_loss: 0.2866 - val_accuracy: 0.9657 - lr: 1.0000e-04
Epoch 48/100
71/71 [==============================] - 5s 73ms/step - loss: 0.2139 - accuracy: 0.9907 - val_loss: 0.2889 - val_accuracy: 0.9643 - lr: 1.0000e-04
Epoch 49/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2108 - accuracy: 0.9915 - val_loss: 0.2854 - val_accuracy: 0.9653 - lr: 1.0000e-04
Epoch 50/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2077 - accuracy: 0.9918 - val_loss: 0.2826 - val_accuracy: 0.9643 - lr: 1.0000e-04
Epoch 51/100
71/71 [==============================] - 5s 76ms/step - loss: 0.2054 - accuracy: 0.9918 - val_loss: 0.2812 - val_accuracy: 0.9657 - lr: 1.0000e-04
Epoch 52/100
71/71 [==============================] - 5s 75ms/step - loss: 0.2013 - accuracy: 0.9930 - val_loss: 0.2790 - val_accuracy: 0.9673 - lr: 1.0000e-04
Epoch 53/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1994 - accuracy: 0.9937 - val_loss: 0.2765 - val_accuracy: 0.9657 - lr: 1.0000e-04
Epoch 54/100
71/71 [==============================] - 6s 78ms/step - loss: 0.1991 - accuracy: 0.9932 - val_loss: 0.2702 - val_accuracy: 0.9687 - lr: 1.0000e-04
Epoch 55/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1954 - accuracy: 0.9947 - val_loss: 0.2685 - val_accuracy: 0.9683 - lr: 1.0000e-04
Epoch 56/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1917 - accuracy: 0.9945 - val_loss: 0.2659 - val_accuracy: 0.9700 - lr: 1.0000e-04
Epoch 57/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1887 - accuracy: 0.9960 - val_loss: 0.2707 - val_accuracy: 0.9667 - lr: 1.0000e-04
Epoch 58/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1864 - accuracy: 0.9952 - val_loss: 0.2661 - val_accuracy: 0.9677 - lr: 1.0000e-04
Epoch 59/100
71/71 [==============================] - 6s 77ms/step - loss: 0.1862 - accuracy: 0.9952 - val_loss: 0.2745 - val_accuracy: 0.9660 - lr: 1.0000e-04
Epoch 60/100
71/71 [==============================] - 5s 73ms/step - loss: 0.1829 - accuracy: 0.9962 - val_loss: 0.2648 - val_accuracy: 0.9693 - lr: 1.0000e-04
Epoch 61/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1799 - accuracy: 0.9966 - val_loss: 0.2597 - val_accuracy: 0.9687 - lr: 1.0000e-04
Epoch 62/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1778 - accuracy: 0.9966 - val_loss: 0.2646 - val_accuracy: 0.9650 - lr: 1.0000e-04
Epoch 63/100
71/71 [==============================] - 5s 73ms/step - loss: 0.1785 - accuracy: 0.9956 - val_loss: 0.2587 - val_accuracy: 0.9693 - lr: 1.0000e-04
Epoch 64/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1738 - accuracy: 0.9971 - val_loss: 0.2594 - val_accuracy: 0.9683 - lr: 1.0000e-04
Epoch 65/100
71/71 [==============================] - 6s 77ms/step - loss: 0.1714 - accuracy: 0.9975 - val_loss: 0.2574 - val_accuracy: 0.9710 - lr: 1.0000e-04
Epoch 66/100
71/71 [==============================] - 5s 73ms/step - loss: 0.1702 - accuracy: 0.9969 - val_loss: 0.2644 - val_accuracy: 0.9680 - lr: 1.0000e-04
Epoch 67/100
71/71 [==============================] - 5s 77ms/step - loss: 0.1675 - accuracy: 0.9979 - val_loss: 0.2539 - val_accuracy: 0.9673 - lr: 1.0000e-04
Epoch 68/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1661 - accuracy: 0.9979 - val_loss: 0.2544 - val_accuracy: 0.9677 - lr: 1.0000e-04
Epoch 69/100
71/71 [==============================] - 6s 77ms/step - loss: 0.1638 - accuracy: 0.9979 - val_loss: 0.2470 - val_accuracy: 0.9697 - lr: 1.0000e-04
Epoch 70/100
71/71 [==============================] - 5s 77ms/step - loss: 0.1623 - accuracy: 0.9978 - val_loss: 0.2524 - val_accuracy: 0.9710 - lr: 1.0000e-04
Epoch 71/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1598 - accuracy: 0.9990 - val_loss: 0.2436 - val_accuracy: 0.9713 - lr: 1.0000e-04
Epoch 72/100
71/71 [==============================] - 5s 77ms/step - loss: 0.1595 - accuracy: 0.9983 - val_loss: 0.2326 - val_accuracy: 0.9720 - lr: 1.0000e-04
Epoch 73/100
71/71 [==============================] - 5s 77ms/step - loss: 0.1572 - accuracy: 0.9988 - val_loss: 0.2475 - val_accuracy: 0.9700 - lr: 1.0000e-04
Epoch 74/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1592 - accuracy: 0.9972 - val_loss: 0.2484 - val_accuracy: 0.9710 - lr: 1.0000e-04
Epoch 75/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1551 - accuracy: 0.9984 - val_loss: 0.2431 - val_accuracy: 0.9717 - lr: 1.0000e-04
Epoch 76/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1518 - accuracy: 0.9986 - val_loss: 0.2388 - val_accuracy: 0.9713 - lr: 1.0000e-04
Epoch 77/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1523 - accuracy: 0.9982 - val_loss: 0.2381 - val_accuracy: 0.9723 - lr: 1.0000e-04
Epoch 78/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1483 - accuracy: 0.9991 - val_loss: 0.2286 - val_accuracy: 0.9743 - lr: 1.0000e-05
Epoch 79/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1482 - accuracy: 0.9988 - val_loss: 0.2259 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 80/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1474 - accuracy: 0.9994 - val_loss: 0.2251 - val_accuracy: 0.9750 - lr: 1.0000e-05
Epoch 81/100
71/71 [==============================] - 5s 74ms/step - loss: 0.1458 - accuracy: 0.9998 - val_loss: 0.2241 - val_accuracy: 0.9750 - lr: 1.0000e-05
Epoch 82/100
71/71 [==============================] - 5s 77ms/step - loss: 0.1465 - accuracy: 0.9991 - val_loss: 0.2249 - val_accuracy: 0.9743 - lr: 1.0000e-05
Epoch 83/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1465 - accuracy: 0.9992 - val_loss: 0.2227 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 84/100
71/71 [==============================] - 6s 78ms/step - loss: 0.1454 - accuracy: 0.9994 - val_loss: 0.2222 - val_accuracy: 0.9743 - lr: 1.0000e-05
Epoch 85/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1453 - accuracy: 0.9994 - val_loss: 0.2246 - val_accuracy: 0.9747 - lr: 1.0000e-05
Epoch 86/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1446 - accuracy: 0.9996 - val_loss: 0.2227 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 87/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1455 - accuracy: 0.9996 - val_loss: 0.2218 - val_accuracy: 0.9760 - lr: 1.0000e-05
Epoch 88/100
71/71 [==============================] - 6s 78ms/step - loss: 0.1441 - accuracy: 0.9998 - val_loss: 0.2208 - val_accuracy: 0.9763 - lr: 1.0000e-05
Epoch 89/100
71/71 [==============================] - 5s 72ms/step - loss: 0.1451 - accuracy: 0.9994 - val_loss: 0.2222 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 90/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1445 - accuracy: 0.9993 - val_loss: 0.2208 - val_accuracy: 0.9757 - lr: 1.0000e-05
Epoch 91/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1444 - accuracy: 0.9994 - val_loss: 0.2206 - val_accuracy: 0.9760 - lr: 1.0000e-05
Epoch 92/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1428 - accuracy: 0.9999 - val_loss: 0.2210 - val_accuracy: 0.9760 - lr: 1.0000e-05
Epoch 93/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1432 - accuracy: 0.9999 - val_loss: 0.2221 - val_accuracy: 0.9750 - lr: 1.0000e-05
Epoch 94/100
71/71 [==============================] - 5s 77ms/step - loss: 0.1428 - accuracy: 0.9999 - val_loss: 0.2221 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 95/100
71/71 [==============================] - 6s 78ms/step - loss: 0.1427 - accuracy: 0.9998 - val_loss: 0.2234 - val_accuracy: 0.9753 - lr: 1.0000e-05
Epoch 96/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1433 - accuracy: 0.9991 - val_loss: 0.2223 - val_accuracy: 0.9757 - lr: 1.0000e-05
Epoch 97/100
71/71 [==============================] - 5s 76ms/step - loss: 0.1427 - accuracy: 0.9997 - val_loss: 0.2228 - val_accuracy: 0.9757 - lr: 1.0000e-06
Epoch 98/100
71/71 [==============================] - 5s 75ms/step - loss: 0.1427 - accuracy: 0.9994 - val_loss: 0.2229 - val_accuracy: 0.9753 - lr: 1.0000e-06
94/94 [==============================] - 1s 5ms/step
Cohen’s Kappa Score: 0.9746428571428571

Analysing the Graph¶

In [41]:
for fig in figures:
    fig()
Things Observed
    For CNN, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes For CNN 2, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes For VGG 16, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes

Analysing the Results¶

In [42]:
overall.iloc[-3:]
Out[42]:
Model Name Epochs Batch Size Train Loss Test Loss Train Acc Test Acc Kappa Comments
3 CNN basic two 24 128 0.228279 0.786181 0.941294 0.758333 0.745714 NaN
4 CNN2 basic two 77 128 0.038791 0.089882 0.997009 0.979000 0.979286 NaN
5 VGG_Baseline basic two 98 128 0.144072 0.220837 0.999778 0.976333 0.974643 NaN

CutMix data augmentation¶

Cutmix Data Augmentation is

  • Cut and paste random patches between the training images
  • Ground truth labels are mixed in proportion to the area of patches in the images
  • It increases localisation ability by making the model to focus on less discriminative parts of the object being classified and hence is also well suited for tasks like object detection

image.png

In [28]:
def preprocess_data(image, label):
  image = tf.expand_dims(image, -1)
  return image, label

def sample_beta_distribution(size, concentration_0=300, concentration_1=0.1):
  gamma_1_sample = tf.random.gamma(shape=[size], alpha=concentration_1)
  gamma_2_sample = tf.random.gamma(shape=[size], alpha=concentration_0)
  return gamma_1_sample / (gamma_1_sample + gamma_2_sample)


@tf.function
def get_box(lambda_value):
  cut_rat = tf.math.sqrt(1.0 - lambda_value)
  image_wh = (31,31,1)[0]
  cut_wh = image_wh * cut_rat  # rw
  cut_wh = tf.cast(cut_wh, tf.int32)

  cut_x = tf.random.uniform((1,), minval=0, maxval=image_wh, dtype=tf.int32)  # rx
  cut_y = tf.random.uniform((1,), minval=0, maxval=image_wh, dtype=tf.int32)  # ry

  boundaryx1 = tf.clip_by_value(cut_x[0] - cut_wh // 2, 0, image_wh)
  boundaryy1 = tf.clip_by_value(cut_y[0] - cut_wh // 2, 0, image_wh)
  bbx2 = tf.clip_by_value(cut_x[0] + cut_wh // 2, 0, image_wh)
  bby2 = tf.clip_by_value(cut_y[0] + cut_wh // 2, 0, image_wh)

  target_h = bby2 - boundaryy1
  if target_h == 0:
      target_h += 1

  target_w = bbx2 - boundaryx1
  if target_w == 0:
      target_w += 1

  return boundaryx1, boundaryy1, target_h, target_w


@tf.function
def cutmix(train_ds_one, train_ds_two):
  (image1, label1), (image2, label2) = train_ds_one, train_ds_two
  image_size = (128,128,1)[0]
  alpha = [1]
  beta = [1]
  lambda_value = sample_beta_distribution(1, alpha, beta)
  lambda_value = lambda_value[0][0]
  boundaryx1, boundaryy1, target_h, target_w = get_box(lambda_value)
  crop2 = tf.image.crop_to_bounding_box(
      image2, boundaryy1, boundaryx1, target_h, target_w
  )
  image2 = tf.image.pad_to_bounding_box(
      crop2, boundaryy1, boundaryx1, image_size, image_size
  )
  crop1 = tf.image.crop_to_bounding_box(
      image1, boundaryy1, boundaryx1, target_h, target_w
  )
  img1 = tf.image.pad_to_bounding_box(
      crop1, boundaryy1, boundaryx1, image_size, image_size
  )

  image1 = image1 - img1
  image = image1 + image2
  lambda_value = 1 - (target_w * target_h) / (image_size * image_size)
  lambda_value = tf.cast(lambda_value, tf.float32)
  label = lambda_value * label1 + (1 - lambda_value) * label2
  return image, label

Set up the train and validation data¶

In [29]:
train_ds_one = tf.data.Dataset.from_tensor_slices((X_train_big, y_train_big)).shuffle(2048).map(preprocess_data, num_parallel_calls=tf.data.AUTOTUNE)
train_ds_two = tf.data.Dataset.from_tensor_slices((X_train_big, y_train_big)).shuffle(2048).map(preprocess_data, num_parallel_calls=tf.data.AUTOTUNE)

val_ds_cutmix = tf.data.Dataset.from_tensor_slices((X_val_big, y_val_big))
val_ds_cutmix = val_ds_cutmix.map(preprocess_data, num_parallel_calls=tf.data.AUTOTUNE).batch(128).prefetch(tf.data.AUTOTUNE)

train_ds_cutmix = tf.data.Dataset.zip((train_ds_one, train_ds_two))
train_ds_cutmix = (
    train_ds_cutmix.shuffle(1024)
    .map(cutmix, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(128)
    .prefetch(tf.data.AUTOTUNE)
)
train_ds_cutmix
image_batch, label_batch = next(iter(train_ds_cutmix))

Visualisation of the augmented dataset¶

In [45]:
plt.figure(figsize=(10, 10))
for i in range(15):
    ax = plt.subplot(3, 5, i + 1)
    plt.title(labels_dict[np.argmax(label_batch[i])])
    plt.imshow(tf.squeeze(image_batch[i]), cmap="gray")
    plt.axis("off")
plt.show()

Running the models¶

In [46]:
models = models_array()
figures_cutmix = []

for i in range(len(models)):
    print(f'Running {model_names[i]}')
    results, fig = evaluator.model_evaluate( train_ds_cutmix, val_ds_cutmix , models[i], base_hparams)
    results['Model Name'] = f'{model_names[i]} cutmix'
    y_pred = models[i].predict(X_test_big)
    y_pred_classes = np.argmax(y_pred, axis=1)
    
    kappa = cohen_kappa_score(y_test_big, y_pred_classes)
    print("Cohen’s Kappa Score:", kappa)
    results['Kappa'] = kappa
    overall = pd.concat([overall, pd.DataFrame([results])], ignore_index=True)
    figures_cutmix.append(fig)
    
Model: "sequential_16"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_11 (Conv2D)          (None, 128, 128, 32)      320       
                                                                 
 batch_normalization_11 (Bat  (None, 128, 128, 32)     128       
 chNormalization)                                                
                                                                 
 re_lu_11 (ReLU)             (None, 128, 128, 32)      0         
                                                                 
 conv2d_12 (Conv2D)          (None, 128, 128, 64)      18496     
                                                                 
 batch_normalization_12 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 re_lu_12 (ReLU)             (None, 128, 128, 64)      0         
                                                                 
 conv2d_13 (Conv2D)          (None, 128, 128, 128)     73856     
                                                                 
 batch_normalization_13 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_13 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_14 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_14 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_14 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_15 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_15 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_15 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_16 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_16 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_16 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 global_average_pooling2d_1   (None, 128)              0         
 (GlobalAveragePooling2D)                                        
                                                                 
 dense_1 (Dense)             (None, 256)               33024     
                                                                 
 dense_2 (Dense)             (None, 15)                3855      
                                                                 
=================================================================
Total params: 574,735
Trainable params: 573,519
Non-trainable params: 1,216
_________________________________________________________________
Model: "CNN_Baseline"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_17 (Conv2D)          (None, 128, 128, 64)      640       
                                                                 
 batch_normalization_17 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 max_pooling2d_4 (MaxPooling  (None, 64, 64, 64)       0         
 2D)                                                             
                                                                 
 dropout_1 (Dropout)         (None, 64, 64, 64)        0         
                                                                 
 conv2d_18 (Conv2D)          (None, 64, 64, 128)       73856     
                                                                 
 batch_normalization_18 (Bat  (None, 64, 64, 128)      512       
 chNormalization)                                                
                                                                 
 max_pooling2d_5 (MaxPooling  (None, 32, 32, 128)      0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 32, 32, 128)       0         
                                                                 
 conv2d_19 (Conv2D)          (None, 32, 32, 256)       295168    
                                                                 
 batch_normalization_19 (Bat  (None, 32, 32, 256)      1024      
 chNormalization)                                                
                                                                 
 max_pooling2d_6 (MaxPooling  (None, 16, 16, 256)      0         
 2D)                                                             
                                                                 
 dropout_3 (Dropout)         (None, 16, 16, 256)       0         
                                                                 
 flatten (Flatten)           (None, 65536)             0         
                                                                 
 dense_3 (Dense)             (None, 512)               33554944  
                                                                 
 batch_normalization_20 (Bat  (None, 512)              2048      
 chNormalization)                                                
                                                                 
 dropout_4 (Dropout)         (None, 512)               0         
                                                                 
 dense_4 (Dense)             (None, 128)               65664     
                                                                 
 batch_normalization_21 (Bat  (None, 128)              512       
 chNormalization)                                                
                                                                 
 dropout_5 (Dropout)         (None, 128)               0         
                                                                 
 dense_5 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 33,996,559
Trainable params: 33,994,383
Non-trainable params: 2,176
_________________________________________________________________
Running CNN
Epoch 1/100
71/71 [==============================] - 5s 55ms/step - loss: 2.3340 - accuracy: 0.3566 - val_loss: 3.0086 - val_accuracy: 0.1610 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 3s 48ms/step - loss: 1.3583 - accuracy: 0.6002 - val_loss: 2.6224 - val_accuracy: 0.1763 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 3s 48ms/step - loss: 0.9679 - accuracy: 0.7311 - val_loss: 3.6664 - val_accuracy: 0.1900 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 4s 49ms/step - loss: 0.7119 - accuracy: 0.8118 - val_loss: 1.8500 - val_accuracy: 0.4633 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 4s 50ms/step - loss: 0.5490 - accuracy: 0.8738 - val_loss: 0.7664 - val_accuracy: 0.7443 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 3s 48ms/step - loss: 0.4413 - accuracy: 0.9045 - val_loss: 0.6372 - val_accuracy: 0.8043 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 3s 48ms/step - loss: 0.3908 - accuracy: 0.9256 - val_loss: 0.5686 - val_accuracy: 0.8243 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 3s 47ms/step - loss: 0.3374 - accuracy: 0.9451 - val_loss: 1.8107 - val_accuracy: 0.6000 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 3s 47ms/step - loss: 0.3108 - accuracy: 0.9548 - val_loss: 0.9040 - val_accuracy: 0.7207 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 3s 48ms/step - loss: 0.2908 - accuracy: 0.9620 - val_loss: 0.3898 - val_accuracy: 0.8763 - lr: 0.0010
Epoch 11/100
71/71 [==============================] - 3s 48ms/step - loss: 0.2621 - accuracy: 0.9728 - val_loss: 0.3589 - val_accuracy: 0.8880 - lr: 0.0010
Epoch 12/100
71/71 [==============================] - 3s 47ms/step - loss: 0.2448 - accuracy: 0.9795 - val_loss: 0.3828 - val_accuracy: 0.8843 - lr: 0.0010
Epoch 13/100
71/71 [==============================] - 3s 47ms/step - loss: 0.2302 - accuracy: 0.9839 - val_loss: 0.9990 - val_accuracy: 0.7400 - lr: 0.0010
Epoch 14/100
71/71 [==============================] - 3s 48ms/step - loss: 0.2194 - accuracy: 0.9874 - val_loss: 0.3257 - val_accuracy: 0.9053 - lr: 0.0010
Epoch 15/100
71/71 [==============================] - 3s 47ms/step - loss: 0.2139 - accuracy: 0.9883 - val_loss: 0.3066 - val_accuracy: 0.9040 - lr: 0.0010
Epoch 16/100
71/71 [==============================] - 3s 47ms/step - loss: 0.2055 - accuracy: 0.9910 - val_loss: 0.5151 - val_accuracy: 0.8453 - lr: 0.0010
Epoch 17/100
71/71 [==============================] - 3s 47ms/step - loss: 0.2040 - accuracy: 0.9907 - val_loss: 0.7521 - val_accuracy: 0.7897 - lr: 0.0010
Epoch 18/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1961 - accuracy: 0.9937 - val_loss: 0.5512 - val_accuracy: 0.8383 - lr: 0.0010
Epoch 19/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1928 - accuracy: 0.9947 - val_loss: 0.7449 - val_accuracy: 0.8017 - lr: 0.0010
Epoch 20/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1873 - accuracy: 0.9958 - val_loss: 0.2979 - val_accuracy: 0.9097 - lr: 0.0010
Epoch 21/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1863 - accuracy: 0.9951 - val_loss: 0.4013 - val_accuracy: 0.8800 - lr: 0.0010
Epoch 22/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1828 - accuracy: 0.9963 - val_loss: 0.2471 - val_accuracy: 0.9270 - lr: 0.0010
Epoch 23/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1918 - accuracy: 0.9939 - val_loss: 3.1083 - val_accuracy: 0.4700 - lr: 0.0010
Epoch 24/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1806 - accuracy: 0.9963 - val_loss: 0.3412 - val_accuracy: 0.8960 - lr: 0.0010
Epoch 25/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1772 - accuracy: 0.9971 - val_loss: 0.4389 - val_accuracy: 0.8650 - lr: 0.0010
Epoch 26/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1742 - accuracy: 0.9979 - val_loss: 0.6846 - val_accuracy: 0.8060 - lr: 0.0010
Epoch 27/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1732 - accuracy: 0.9976 - val_loss: 0.6614 - val_accuracy: 0.8153 - lr: 0.0010
Epoch 28/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1691 - accuracy: 0.9988 - val_loss: 0.2010 - val_accuracy: 0.9373 - lr: 1.0000e-04
Epoch 29/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1650 - accuracy: 0.9988 - val_loss: 0.1795 - val_accuracy: 0.9430 - lr: 1.0000e-04
Epoch 30/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1651 - accuracy: 0.9986 - val_loss: 0.2967 - val_accuracy: 0.9063 - lr: 1.0000e-04
Epoch 31/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1651 - accuracy: 0.9991 - val_loss: 0.1787 - val_accuracy: 0.9437 - lr: 1.0000e-04
Epoch 32/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1645 - accuracy: 0.9989 - val_loss: 0.1789 - val_accuracy: 0.9457 - lr: 1.0000e-04
Epoch 33/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1642 - accuracy: 0.9992 - val_loss: 0.1862 - val_accuracy: 0.9420 - lr: 1.0000e-04
Epoch 34/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1648 - accuracy: 0.9993 - val_loss: 0.1792 - val_accuracy: 0.9430 - lr: 1.0000e-04
Epoch 35/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1640 - accuracy: 0.9991 - val_loss: 0.1871 - val_accuracy: 0.9407 - lr: 1.0000e-04
Epoch 36/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1630 - accuracy: 0.9989 - val_loss: 0.2339 - val_accuracy: 0.9273 - lr: 1.0000e-04
Epoch 37/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1617 - accuracy: 0.9994 - val_loss: 0.2055 - val_accuracy: 0.9347 - lr: 1.0000e-05
Epoch 38/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1620 - accuracy: 0.9994 - val_loss: 0.1950 - val_accuracy: 0.9407 - lr: 1.0000e-05
Epoch 39/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1614 - accuracy: 0.9999 - val_loss: 0.1952 - val_accuracy: 0.9413 - lr: 1.0000e-05
Epoch 40/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1617 - accuracy: 0.9994 - val_loss: 0.1947 - val_accuracy: 0.9403 - lr: 1.0000e-05
Epoch 41/100
71/71 [==============================] - 3s 47ms/step - loss: 0.1617 - accuracy: 0.9991 - val_loss: 0.2031 - val_accuracy: 0.9397 - lr: 1.0000e-05
Epoch 42/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1625 - accuracy: 0.9994 - val_loss: 0.2011 - val_accuracy: 0.9397 - lr: 1.0000e-06
94/94 [==============================] - 1s 4ms/step
Cohen’s Kappa Score: 0.9510714285714286
Running CNN2
Epoch 1/100
71/71 [==============================] - 19s 257ms/step - loss: 1.7878 - accuracy: 0.4587 - val_loss: 4.0731 - val_accuracy: 0.0987 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 18s 253ms/step - loss: 1.1878 - accuracy: 0.6583 - val_loss: 5.1206 - val_accuracy: 0.1483 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 18s 254ms/step - loss: 0.9155 - accuracy: 0.7545 - val_loss: 4.5687 - val_accuracy: 0.2820 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 18s 254ms/step - loss: 0.7063 - accuracy: 0.8220 - val_loss: 1.8533 - val_accuracy: 0.5067 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 18s 254ms/step - loss: 0.6006 - accuracy: 0.8556 - val_loss: 2.2554 - val_accuracy: 0.4253 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 18s 254ms/step - loss: 0.5081 - accuracy: 0.8889 - val_loss: 3.2624 - val_accuracy: 0.4637 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 18s 254ms/step - loss: 0.4479 - accuracy: 0.9098 - val_loss: 1.3525 - val_accuracy: 0.6073 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 18s 254ms/step - loss: 0.4045 - accuracy: 0.9233 - val_loss: 2.1651 - val_accuracy: 0.4577 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 18s 254ms/step - loss: 0.3781 - accuracy: 0.9380 - val_loss: 2.5053 - val_accuracy: 0.3750 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 18s 254ms/step - loss: 0.3464 - accuracy: 0.9468 - val_loss: 0.8032 - val_accuracy: 0.7140 - lr: 0.0010
Epoch 11/100
71/71 [==============================] - 18s 254ms/step - loss: 0.3256 - accuracy: 0.9533 - val_loss: 0.7822 - val_accuracy: 0.7643 - lr: 0.0010
Epoch 12/100
71/71 [==============================] - 18s 254ms/step - loss: 0.3040 - accuracy: 0.9578 - val_loss: 1.6804 - val_accuracy: 0.6890 - lr: 0.0010
Epoch 13/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2898 - accuracy: 0.9640 - val_loss: 2.8188 - val_accuracy: 0.4077 - lr: 0.0010
Epoch 14/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2721 - accuracy: 0.9693 - val_loss: 4.6301 - val_accuracy: 0.3590 - lr: 0.0010
Epoch 15/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2645 - accuracy: 0.9735 - val_loss: 0.7805 - val_accuracy: 0.7577 - lr: 0.0010
Epoch 16/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2440 - accuracy: 0.9803 - val_loss: 1.3518 - val_accuracy: 0.6323 - lr: 0.0010
Epoch 17/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2388 - accuracy: 0.9818 - val_loss: 0.4260 - val_accuracy: 0.8790 - lr: 0.0010
Epoch 18/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2506 - accuracy: 0.9757 - val_loss: 0.7564 - val_accuracy: 0.7567 - lr: 0.0010
Epoch 19/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2279 - accuracy: 0.9838 - val_loss: 2.2328 - val_accuracy: 0.5347 - lr: 0.0010
Epoch 20/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2298 - accuracy: 0.9828 - val_loss: 0.7749 - val_accuracy: 0.7757 - lr: 0.0010
Epoch 21/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2244 - accuracy: 0.9839 - val_loss: 0.4375 - val_accuracy: 0.8803 - lr: 0.0010
Epoch 22/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2107 - accuracy: 0.9886 - val_loss: 0.2771 - val_accuracy: 0.9187 - lr: 0.0010
Epoch 23/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2021 - accuracy: 0.9906 - val_loss: 0.8245 - val_accuracy: 0.7517 - lr: 0.0010
Epoch 24/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2057 - accuracy: 0.9894 - val_loss: 0.2404 - val_accuracy: 0.9307 - lr: 0.0010
Epoch 25/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2060 - accuracy: 0.9889 - val_loss: 1.0036 - val_accuracy: 0.7393 - lr: 0.0010
Epoch 26/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1999 - accuracy: 0.9912 - val_loss: 0.4259 - val_accuracy: 0.8793 - lr: 0.0010
Epoch 27/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1857 - accuracy: 0.9947 - val_loss: 1.4373 - val_accuracy: 0.6787 - lr: 0.0010
Epoch 28/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2016 - accuracy: 0.9888 - val_loss: 0.5908 - val_accuracy: 0.8473 - lr: 0.0010
Epoch 29/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1914 - accuracy: 0.9928 - val_loss: 0.4179 - val_accuracy: 0.8927 - lr: 0.0010
Epoch 30/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1720 - accuracy: 0.9979 - val_loss: 0.0722 - val_accuracy: 0.9897 - lr: 1.0000e-04
Epoch 31/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1636 - accuracy: 0.9984 - val_loss: 0.0673 - val_accuracy: 0.9907 - lr: 1.0000e-04
Epoch 32/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1632 - accuracy: 0.9988 - val_loss: 0.0700 - val_accuracy: 0.9890 - lr: 1.0000e-04
Epoch 33/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1612 - accuracy: 0.9988 - val_loss: 0.0641 - val_accuracy: 0.9883 - lr: 1.0000e-04
Epoch 34/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1611 - accuracy: 0.9992 - val_loss: 0.0592 - val_accuracy: 0.9900 - lr: 1.0000e-04
Epoch 35/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1612 - accuracy: 0.9989 - val_loss: 0.0574 - val_accuracy: 0.9907 - lr: 1.0000e-04
Epoch 36/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1594 - accuracy: 0.9993 - val_loss: 0.0624 - val_accuracy: 0.9903 - lr: 1.0000e-04
Epoch 37/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1591 - accuracy: 0.9992 - val_loss: 0.0639 - val_accuracy: 0.9907 - lr: 1.0000e-04
Epoch 38/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1585 - accuracy: 0.9994 - val_loss: 0.0549 - val_accuracy: 0.9907 - lr: 1.0000e-04
Epoch 39/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1602 - accuracy: 0.9992 - val_loss: 0.0611 - val_accuracy: 0.9903 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1596 - accuracy: 0.9989 - val_loss: 0.0600 - val_accuracy: 0.9920 - lr: 1.0000e-04
Epoch 41/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1590 - accuracy: 0.9990 - val_loss: 0.0574 - val_accuracy: 0.9903 - lr: 1.0000e-04
Epoch 42/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1578 - accuracy: 0.9993 - val_loss: 0.0556 - val_accuracy: 0.9923 - lr: 1.0000e-04
Epoch 43/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1572 - accuracy: 0.9993 - val_loss: 0.0581 - val_accuracy: 0.9910 - lr: 1.0000e-04
Epoch 44/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1558 - accuracy: 0.9998 - val_loss: 0.0519 - val_accuracy: 0.9920 - lr: 1.0000e-05
Epoch 45/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1556 - accuracy: 0.9997 - val_loss: 0.0497 - val_accuracy: 0.9913 - lr: 1.0000e-05
Epoch 46/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1543 - accuracy: 0.9996 - val_loss: 0.0497 - val_accuracy: 0.9917 - lr: 1.0000e-05
Epoch 47/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1564 - accuracy: 0.9994 - val_loss: 0.0498 - val_accuracy: 0.9917 - lr: 1.0000e-05
Epoch 48/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1548 - accuracy: 0.9992 - val_loss: 0.0496 - val_accuracy: 0.9920 - lr: 1.0000e-05
Epoch 49/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1534 - accuracy: 0.9997 - val_loss: 0.0491 - val_accuracy: 0.9920 - lr: 1.0000e-05
Epoch 50/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1553 - accuracy: 0.9994 - val_loss: 0.0506 - val_accuracy: 0.9917 - lr: 1.0000e-05
Epoch 51/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1548 - accuracy: 0.9994 - val_loss: 0.0501 - val_accuracy: 0.9920 - lr: 1.0000e-05
Epoch 52/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1552 - accuracy: 0.9996 - val_loss: 0.0498 - val_accuracy: 0.9923 - lr: 1.0000e-05
94/94 [==============================] - 2s 16ms/step
Cohen’s Kappa Score: 0.9907142857142857
Running VGG_Baseline
Epoch 1/100
71/71 [==============================] - 6s 65ms/step - loss: 3.9520 - accuracy: 0.2846 - val_loss: 7.1408 - val_accuracy: 0.0770 - lr: 0.0100
Epoch 2/100
71/71 [==============================] - 4s 59ms/step - loss: 2.3396 - accuracy: 0.4279 - val_loss: 7.3036 - val_accuracy: 0.0740 - lr: 0.0100
Epoch 3/100
71/71 [==============================] - 4s 59ms/step - loss: 2.0839 - accuracy: 0.4915 - val_loss: 4.7925 - val_accuracy: 0.0863 - lr: 0.0100
Epoch 4/100
71/71 [==============================] - 4s 59ms/step - loss: 1.9794 - accuracy: 0.5269 - val_loss: 3.1535 - val_accuracy: 0.1473 - lr: 0.0100
Epoch 5/100
71/71 [==============================] - 4s 58ms/step - loss: 1.7999 - accuracy: 0.5850 - val_loss: 8.2560 - val_accuracy: 0.0573 - lr: 0.0100
Epoch 6/100
71/71 [==============================] - 4s 58ms/step - loss: 1.7733 - accuracy: 0.5974 - val_loss: 4.9414 - val_accuracy: 0.0677 - lr: 0.0100
Epoch 7/100
71/71 [==============================] - 4s 58ms/step - loss: 1.6442 - accuracy: 0.6539 - val_loss: 6.0240 - val_accuracy: 0.0907 - lr: 0.0100
Epoch 8/100
71/71 [==============================] - 4s 58ms/step - loss: 1.5646 - accuracy: 0.6693 - val_loss: 3.7842 - val_accuracy: 0.1440 - lr: 0.0100
Epoch 9/100
71/71 [==============================] - 4s 59ms/step - loss: 1.4531 - accuracy: 0.7086 - val_loss: 3.9984 - val_accuracy: 0.2040 - lr: 0.0100
Epoch 10/100
71/71 [==============================] - 4s 59ms/step - loss: 1.1131 - accuracy: 0.8107 - val_loss: 2.8915 - val_accuracy: 0.2390 - lr: 1.0000e-03
Epoch 11/100
71/71 [==============================] - 4s 59ms/step - loss: 0.9129 - accuracy: 0.8522 - val_loss: 2.3875 - val_accuracy: 0.2973 - lr: 1.0000e-03
Epoch 12/100
71/71 [==============================] - 4s 59ms/step - loss: 0.8237 - accuracy: 0.8748 - val_loss: 2.2035 - val_accuracy: 0.3167 - lr: 1.0000e-03
Epoch 13/100
71/71 [==============================] - 4s 58ms/step - loss: 0.7508 - accuracy: 0.8865 - val_loss: 2.3202 - val_accuracy: 0.3017 - lr: 1.0000e-03
Epoch 14/100
71/71 [==============================] - 4s 58ms/step - loss: 0.6863 - accuracy: 0.9042 - val_loss: 2.1940 - val_accuracy: 0.2853 - lr: 1.0000e-03
Epoch 15/100
71/71 [==============================] - 4s 59ms/step - loss: 0.6453 - accuracy: 0.9165 - val_loss: 2.0312 - val_accuracy: 0.3660 - lr: 1.0000e-03
Epoch 16/100
71/71 [==============================] - 4s 59ms/step - loss: 0.6132 - accuracy: 0.9238 - val_loss: 1.7689 - val_accuracy: 0.4973 - lr: 1.0000e-03
Epoch 17/100
71/71 [==============================] - 4s 59ms/step - loss: 0.5879 - accuracy: 0.9315 - val_loss: 1.5576 - val_accuracy: 0.5430 - lr: 1.0000e-03
Epoch 18/100
71/71 [==============================] - 4s 59ms/step - loss: 0.5633 - accuracy: 0.9385 - val_loss: 1.1693 - val_accuracy: 0.6370 - lr: 1.0000e-03
Epoch 19/100
71/71 [==============================] - 4s 58ms/step - loss: 0.5197 - accuracy: 0.9484 - val_loss: 1.4228 - val_accuracy: 0.5770 - lr: 1.0000e-03
Epoch 20/100
71/71 [==============================] - 4s 58ms/step - loss: 0.5020 - accuracy: 0.9548 - val_loss: 1.6494 - val_accuracy: 0.5170 - lr: 1.0000e-03
Epoch 21/100
71/71 [==============================] - 4s 59ms/step - loss: 0.5114 - accuracy: 0.9466 - val_loss: 0.9779 - val_accuracy: 0.7420 - lr: 1.0000e-03
Epoch 22/100
71/71 [==============================] - 4s 58ms/step - loss: 0.4705 - accuracy: 0.9616 - val_loss: 0.9594 - val_accuracy: 0.7287 - lr: 1.0000e-03
Epoch 23/100
71/71 [==============================] - 4s 58ms/step - loss: 0.4707 - accuracy: 0.9609 - val_loss: 0.9918 - val_accuracy: 0.7350 - lr: 1.0000e-03
Epoch 24/100
71/71 [==============================] - 4s 59ms/step - loss: 0.4676 - accuracy: 0.9589 - val_loss: 0.8780 - val_accuracy: 0.7703 - lr: 1.0000e-03
Epoch 25/100
71/71 [==============================] - 4s 58ms/step - loss: 0.4302 - accuracy: 0.9701 - val_loss: 1.0563 - val_accuracy: 0.7163 - lr: 1.0000e-03
Epoch 26/100
71/71 [==============================] - 4s 58ms/step - loss: 0.4041 - accuracy: 0.9777 - val_loss: 1.1323 - val_accuracy: 0.6427 - lr: 1.0000e-03
Epoch 27/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3991 - accuracy: 0.9764 - val_loss: 1.0908 - val_accuracy: 0.6693 - lr: 1.0000e-03
Epoch 28/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3942 - accuracy: 0.9776 - val_loss: 0.7115 - val_accuracy: 0.8223 - lr: 1.0000e-03
Epoch 29/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3892 - accuracy: 0.9770 - val_loss: 1.2553 - val_accuracy: 0.6257 - lr: 1.0000e-03
Epoch 30/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3763 - accuracy: 0.9792 - val_loss: 1.2234 - val_accuracy: 0.6350 - lr: 1.0000e-03
Epoch 31/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3709 - accuracy: 0.9821 - val_loss: 0.7015 - val_accuracy: 0.8110 - lr: 1.0000e-03
Epoch 32/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3900 - accuracy: 0.9746 - val_loss: 0.9373 - val_accuracy: 0.7887 - lr: 1.0000e-03
Epoch 33/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3748 - accuracy: 0.9785 - val_loss: 0.5147 - val_accuracy: 0.8863 - lr: 1.0000e-03
Epoch 34/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3625 - accuracy: 0.9813 - val_loss: 0.7600 - val_accuracy: 0.7913 - lr: 1.0000e-03
Epoch 35/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3564 - accuracy: 0.9822 - val_loss: 0.8401 - val_accuracy: 0.7590 - lr: 1.0000e-03
Epoch 36/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3400 - accuracy: 0.9868 - val_loss: 0.8195 - val_accuracy: 0.7657 - lr: 1.0000e-03
Epoch 37/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3415 - accuracy: 0.9848 - val_loss: 0.4422 - val_accuracy: 0.9000 - lr: 1.0000e-03
Epoch 38/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3278 - accuracy: 0.9889 - val_loss: 0.7631 - val_accuracy: 0.7917 - lr: 1.0000e-03
Epoch 39/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3382 - accuracy: 0.9876 - val_loss: 0.5782 - val_accuracy: 0.8523 - lr: 1.0000e-03
Epoch 40/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3332 - accuracy: 0.9855 - val_loss: 1.6601 - val_accuracy: 0.5307 - lr: 1.0000e-03
Epoch 41/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3257 - accuracy: 0.9872 - val_loss: 0.7446 - val_accuracy: 0.7850 - lr: 1.0000e-03
Epoch 42/100
71/71 [==============================] - 4s 58ms/step - loss: 0.3173 - accuracy: 0.9897 - val_loss: 0.5939 - val_accuracy: 0.8440 - lr: 1.0000e-03
Epoch 43/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2896 - accuracy: 0.9976 - val_loss: 0.2230 - val_accuracy: 0.9797 - lr: 1.0000e-04
Epoch 44/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2741 - accuracy: 0.9990 - val_loss: 0.1817 - val_accuracy: 0.9900 - lr: 1.0000e-04
Epoch 45/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2711 - accuracy: 0.9994 - val_loss: 0.1700 - val_accuracy: 0.9927 - lr: 1.0000e-04
Epoch 46/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2692 - accuracy: 0.9994 - val_loss: 0.1648 - val_accuracy: 0.9930 - lr: 1.0000e-04
Epoch 47/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2652 - accuracy: 0.9996 - val_loss: 0.1620 - val_accuracy: 0.9923 - lr: 1.0000e-04
Epoch 48/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2629 - accuracy: 0.9998 - val_loss: 0.1576 - val_accuracy: 0.9933 - lr: 1.0000e-04
Epoch 49/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2604 - accuracy: 0.9998 - val_loss: 0.1589 - val_accuracy: 0.9933 - lr: 1.0000e-04
Epoch 50/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2574 - accuracy: 0.9999 - val_loss: 0.1551 - val_accuracy: 0.9930 - lr: 1.0000e-04
Epoch 51/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2553 - accuracy: 0.9999 - val_loss: 0.1503 - val_accuracy: 0.9937 - lr: 1.0000e-04
Epoch 52/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2536 - accuracy: 0.9997 - val_loss: 0.1540 - val_accuracy: 0.9930 - lr: 1.0000e-04
Epoch 53/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2509 - accuracy: 0.9999 - val_loss: 0.1509 - val_accuracy: 0.9933 - lr: 1.0000e-04
Epoch 54/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2503 - accuracy: 0.9997 - val_loss: 0.1488 - val_accuracy: 0.9937 - lr: 1.0000e-04
Epoch 55/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2483 - accuracy: 1.0000 - val_loss: 0.1486 - val_accuracy: 0.9927 - lr: 1.0000e-04
Epoch 56/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2472 - accuracy: 0.9999 - val_loss: 0.1450 - val_accuracy: 0.9923 - lr: 1.0000e-04
Epoch 57/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2448 - accuracy: 0.9996 - val_loss: 0.1458 - val_accuracy: 0.9927 - lr: 1.0000e-04
Epoch 58/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2438 - accuracy: 0.9998 - val_loss: 0.1466 - val_accuracy: 0.9933 - lr: 1.0000e-04
Epoch 59/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2416 - accuracy: 1.0000 - val_loss: 0.1441 - val_accuracy: 0.9923 - lr: 1.0000e-04
Epoch 60/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2380 - accuracy: 1.0000 - val_loss: 0.1464 - val_accuracy: 0.9937 - lr: 1.0000e-04
Epoch 61/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2382 - accuracy: 0.9999 - val_loss: 0.1415 - val_accuracy: 0.9943 - lr: 1.0000e-04
Epoch 62/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2361 - accuracy: 1.0000 - val_loss: 0.1417 - val_accuracy: 0.9927 - lr: 1.0000e-04
Epoch 63/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2348 - accuracy: 0.9999 - val_loss: 0.1453 - val_accuracy: 0.9937 - lr: 1.0000e-04
Epoch 64/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2326 - accuracy: 1.0000 - val_loss: 0.1366 - val_accuracy: 0.9933 - lr: 1.0000e-04
Epoch 65/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2306 - accuracy: 1.0000 - val_loss: 0.1398 - val_accuracy: 0.9930 - lr: 1.0000e-04
Epoch 66/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2300 - accuracy: 1.0000 - val_loss: 0.1386 - val_accuracy: 0.9930 - lr: 1.0000e-04
Epoch 67/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2290 - accuracy: 0.9998 - val_loss: 0.1383 - val_accuracy: 0.9920 - lr: 1.0000e-04
Epoch 68/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2279 - accuracy: 0.9998 - val_loss: 0.1431 - val_accuracy: 0.9913 - lr: 1.0000e-04
Epoch 69/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2256 - accuracy: 0.9999 - val_loss: 0.1328 - val_accuracy: 0.9920 - lr: 1.0000e-04
Epoch 70/100
71/71 [==============================] - 4s 58ms/step - loss: 0.2258 - accuracy: 0.9998 - val_loss: 0.1388 - val_accuracy: 0.9903 - lr: 1.0000e-04
Epoch 71/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2229 - accuracy: 0.9999 - val_loss: 0.1283 - val_accuracy: 0.9927 - lr: 1.0000e-04
94/94 [==============================] - 1s 5ms/step
Cohen’s Kappa Score: 0.9914285714285714

Analysing of Graphs¶

In [47]:
for fig in figures_cutmix:
    fig()

Analysing of Results¶

In [48]:
overall.iloc[-3:]
Out[48]:
Model Name Epochs Batch Size Train Loss Test Loss Train Acc Test Acc Kappa Comments
6 CNN cutmix 42 128 0.164451 0.178881 0.998892 0.945667 0.951071 NaN
7 CNN2 cutmix 52 128 0.157783 0.055629 0.999335 0.992333 0.990714 NaN
8 VGG_Baseline cutmix 71 128 0.238155 0.141491 0.999889 0.994333 0.991429 NaN

CutOut¶

  • Cutout image augmentation is to randomly remove a square region of pixels in an input image during training image.png
In [49]:
def cutout(image, n_holes, length):
    height = tf.shape(image)[0]
    width = tf.shape(image)[1]

    mask = tf.ones((height, width), dtype=tf.float32)

    for _ in range(n_holes):
        y = tf.random.uniform([], 0, height, dtype=tf.int32)
        x = tf.random.uniform([], 0, width, dtype=tf.int32)

        y1 = tf.cast(tf.clip_by_value(y - length // 2, 0, height), tf.int32)
        y2 = tf.cast(tf.clip_by_value(y + length // 2, 0, height), tf.int32)
        x1 = tf.cast(tf.clip_by_value(x - length // 2, 0, width), tf.int32)
        x2 = tf.cast(tf.clip_by_value(x + length // 2, 0, width), tf.int32)

        y_mask = (tf.range(height)[:, None] >= y1) & (tf.range(height)[:, None] < y2)
        x_mask = (tf.range(width)[None, :] >= x1) & (tf.range(width)[None, :] < x2)
        cutout_mask = y_mask & x_mask

        mask = tf.where(cutout_mask, tf.zeros_like(mask), mask)

    mask = tf.expand_dims(mask, -1)  # Expanding to 3D to match image dimensions
    mask = tf.tile(mask, [1, 1, tf.shape(image)[-1]])  # Adjusting channels to match the image

    return image * mask

Set up the train and validation data¶

In [50]:
train_ds_one = tf.data.Dataset.from_tensor_slices((X_train_big, y_train_big)).shuffle(2048).map(preprocess_data, num_parallel_calls=tf.data.AUTOTUNE)
train_ds_two = tf.data.Dataset.from_tensor_slices((X_train_big, y_train_big)).shuffle(2048).map(preprocess_data, num_parallel_calls=tf.data.AUTOTUNE)

val_ds_cutout = tf.data.Dataset.from_tensor_slices((X_val_big, y_val_big))
val_ds_cutout = val_ds_cutout.map(preprocess_data, num_parallel_calls=tf.data.AUTOTUNE)
val_ds_cutout = val_ds_cutout.map(lambda x, y: (cutout(x, n_holes=1, length=16), y), num_parallel_calls=tf.data.AUTOTUNE)
val_ds_cutout = val_ds_cutout.batch(128).prefetch(tf.data.AUTOTUNE)

train_ds_cutout = train_ds_one.map(lambda x, y: (cutout(x, n_holes=1, length=16), y), num_parallel_calls=tf.data.AUTOTUNE)
train_ds_cutout = train_ds_cutout.batch(128).prefetch(tf.data.AUTOTUNE)

image_batch, label_batch = next(iter(train_ds_cutout))

Visualisation of the augmented dataset¶

In [51]:
plt.figure(figsize=(10, 10))
for i in range(15):
    ax = plt.subplot(3, 5, i + 1)
    plt.title(labels_dict[np.argmax(label_batch[i])])
    plt.imshow(tf.squeeze(image_batch[i]), cmap="gray")
    plt.axis("off")
plt.show()

Running the models¶

In [52]:
models = models_array()
figures_cutout = []

for i in range(len(models)):
    print(f'Running {model_names[i]}')
    results, fig = evaluator.model_evaluate( train_ds_cutout, val_ds_cutout , models[i], base_hparams)
    results['Model Name'] = f'{model_names[i]} cutout'
    y_pred = models[i].predict(X_test_big)
    y_pred_classes = np.argmax(y_pred, axis=1)
    
    kappa = cohen_kappa_score(y_test_big, y_pred_classes)
    print("Cohen’s Kappa Score:", kappa)
    results['Kappa'] = kappa
    overall = pd.concat([overall, pd.DataFrame([results])], ignore_index=True)
    figures_cutout.append(fig)
    
Model: "sequential_16"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_11 (Conv2D)          (None, 128, 128, 32)      320       
                                                                 
 batch_normalization_11 (Bat  (None, 128, 128, 32)     128       
 chNormalization)                                                
                                                                 
 re_lu_11 (ReLU)             (None, 128, 128, 32)      0         
                                                                 
 conv2d_12 (Conv2D)          (None, 128, 128, 64)      18496     
                                                                 
 batch_normalization_12 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 re_lu_12 (ReLU)             (None, 128, 128, 64)      0         
                                                                 
 conv2d_13 (Conv2D)          (None, 128, 128, 128)     73856     
                                                                 
 batch_normalization_13 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_13 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_14 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_14 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_14 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_15 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_15 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_15 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_16 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_16 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_16 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 global_average_pooling2d_1   (None, 128)              0         
 (GlobalAveragePooling2D)                                        
                                                                 
 dense_1 (Dense)             (None, 256)               33024     
                                                                 
 dense_2 (Dense)             (None, 15)                3855      
                                                                 
=================================================================
Total params: 574,735
Trainable params: 573,519
Non-trainable params: 1,216
_________________________________________________________________
Model: "CNN_Baseline"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_17 (Conv2D)          (None, 128, 128, 64)      640       
                                                                 
 batch_normalization_17 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 max_pooling2d_4 (MaxPooling  (None, 64, 64, 64)       0         
 2D)                                                             
                                                                 
 dropout_1 (Dropout)         (None, 64, 64, 64)        0         
                                                                 
 conv2d_18 (Conv2D)          (None, 64, 64, 128)       73856     
                                                                 
 batch_normalization_18 (Bat  (None, 64, 64, 128)      512       
 chNormalization)                                                
                                                                 
 max_pooling2d_5 (MaxPooling  (None, 32, 32, 128)      0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 32, 32, 128)       0         
                                                                 
 conv2d_19 (Conv2D)          (None, 32, 32, 256)       295168    
                                                                 
 batch_normalization_19 (Bat  (None, 32, 32, 256)      1024      
 chNormalization)                                                
                                                                 
 max_pooling2d_6 (MaxPooling  (None, 16, 16, 256)      0         
 2D)                                                             
                                                                 
 dropout_3 (Dropout)         (None, 16, 16, 256)       0         
                                                                 
 flatten (Flatten)           (None, 65536)             0         
                                                                 
 dense_3 (Dense)             (None, 512)               33554944  
                                                                 
 batch_normalization_20 (Bat  (None, 512)              2048      
 chNormalization)                                                
                                                                 
 dropout_4 (Dropout)         (None, 512)               0         
                                                                 
 dense_4 (Dense)             (None, 128)               65664     
                                                                 
 batch_normalization_21 (Bat  (None, 128)              512       
 chNormalization)                                                
                                                                 
 dropout_5 (Dropout)         (None, 128)               0         
                                                                 
 dense_5 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 33,996,559
Trainable params: 33,994,383
Non-trainable params: 2,176
_________________________________________________________________
Running CNN
Epoch 1/100
71/71 [==============================] - 5s 57ms/step - loss: 2.2031 - accuracy: 0.3745 - val_loss: 6.4476 - val_accuracy: 0.1113 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 4s 50ms/step - loss: 1.2053 - accuracy: 0.6276 - val_loss: 2.0152 - val_accuracy: 0.3620 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 4s 50ms/step - loss: 0.8698 - accuracy: 0.7292 - val_loss: 1.0972 - val_accuracy: 0.6617 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 3s 49ms/step - loss: 0.6079 - accuracy: 0.8074 - val_loss: 1.3297 - val_accuracy: 0.5957 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 3s 48ms/step - loss: 0.4422 - accuracy: 0.8634 - val_loss: 1.4359 - val_accuracy: 0.6093 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 4s 50ms/step - loss: 0.3258 - accuracy: 0.9018 - val_loss: 0.8552 - val_accuracy: 0.7497 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 4s 50ms/step - loss: 0.2399 - accuracy: 0.9286 - val_loss: 0.7387 - val_accuracy: 0.7867 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 4s 50ms/step - loss: 0.1641 - accuracy: 0.9529 - val_loss: 0.4909 - val_accuracy: 0.8527 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 3s 49ms/step - loss: 0.1443 - accuracy: 0.9599 - val_loss: 0.6127 - val_accuracy: 0.8147 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 3s 49ms/step - loss: 0.1087 - accuracy: 0.9712 - val_loss: 1.0661 - val_accuracy: 0.7373 - lr: 0.0010
Epoch 11/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0857 - accuracy: 0.9766 - val_loss: 0.4850 - val_accuracy: 0.8633 - lr: 0.0010
Epoch 12/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0872 - accuracy: 0.9744 - val_loss: 0.9205 - val_accuracy: 0.7743 - lr: 0.0010
Epoch 13/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0644 - accuracy: 0.9836 - val_loss: 0.5137 - val_accuracy: 0.8530 - lr: 0.0010
Epoch 14/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0765 - accuracy: 0.9788 - val_loss: 0.8142 - val_accuracy: 0.7743 - lr: 0.0010
Epoch 15/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0619 - accuracy: 0.9824 - val_loss: 0.3808 - val_accuracy: 0.8967 - lr: 0.0010
Epoch 16/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0512 - accuracy: 0.9869 - val_loss: 0.9921 - val_accuracy: 0.7680 - lr: 0.0010
Epoch 17/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0457 - accuracy: 0.9874 - val_loss: 0.9095 - val_accuracy: 0.7747 - lr: 0.0010
Epoch 18/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0446 - accuracy: 0.9872 - val_loss: 0.2453 - val_accuracy: 0.9267 - lr: 0.0010
Epoch 19/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0438 - accuracy: 0.9856 - val_loss: 0.5057 - val_accuracy: 0.8583 - lr: 0.0010
Epoch 20/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0485 - accuracy: 0.9846 - val_loss: 0.5884 - val_accuracy: 0.8557 - lr: 0.0010
Epoch 21/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0353 - accuracy: 0.9906 - val_loss: 0.2665 - val_accuracy: 0.9320 - lr: 0.0010
Epoch 22/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0335 - accuracy: 0.9909 - val_loss: 2.9167 - val_accuracy: 0.6673 - lr: 0.0010
Epoch 23/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0345 - accuracy: 0.9897 - val_loss: 1.0126 - val_accuracy: 0.7557 - lr: 0.0010
Epoch 24/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0264 - accuracy: 0.9928 - val_loss: 0.3340 - val_accuracy: 0.9160 - lr: 1.0000e-04
Epoch 25/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0191 - accuracy: 0.9950 - val_loss: 0.2300 - val_accuracy: 0.9390 - lr: 1.0000e-04
Epoch 26/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0204 - accuracy: 0.9941 - val_loss: 0.2156 - val_accuracy: 0.9383 - lr: 1.0000e-04
Epoch 27/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0164 - accuracy: 0.9963 - val_loss: 0.2243 - val_accuracy: 0.9387 - lr: 1.0000e-04
Epoch 28/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0153 - accuracy: 0.9962 - val_loss: 0.2185 - val_accuracy: 0.9420 - lr: 1.0000e-04
Epoch 29/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0129 - accuracy: 0.9977 - val_loss: 0.3506 - val_accuracy: 0.9127 - lr: 1.0000e-04
Epoch 30/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0118 - accuracy: 0.9977 - val_loss: 0.1980 - val_accuracy: 0.9447 - lr: 1.0000e-04
Epoch 31/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0117 - accuracy: 0.9970 - val_loss: 0.1892 - val_accuracy: 0.9493 - lr: 1.0000e-04
Epoch 32/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0113 - accuracy: 0.9975 - val_loss: 0.3271 - val_accuracy: 0.9200 - lr: 1.0000e-04
Epoch 33/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0121 - accuracy: 0.9981 - val_loss: 0.2797 - val_accuracy: 0.9277 - lr: 1.0000e-04
Epoch 34/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0097 - accuracy: 0.9977 - val_loss: 0.1812 - val_accuracy: 0.9517 - lr: 1.0000e-04
Epoch 35/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0086 - accuracy: 0.9986 - val_loss: 0.1981 - val_accuracy: 0.9467 - lr: 1.0000e-04
Epoch 36/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0108 - accuracy: 0.9979 - val_loss: 0.1902 - val_accuracy: 0.9490 - lr: 1.0000e-04
Epoch 37/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0096 - accuracy: 0.9979 - val_loss: 0.1859 - val_accuracy: 0.9510 - lr: 1.0000e-04
Epoch 38/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0103 - accuracy: 0.9972 - val_loss: 0.2028 - val_accuracy: 0.9457 - lr: 1.0000e-04
Epoch 39/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0092 - accuracy: 0.9984 - val_loss: 0.1980 - val_accuracy: 0.9457 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0083 - accuracy: 0.9982 - val_loss: 0.2020 - val_accuracy: 0.9447 - lr: 1.0000e-05
Epoch 41/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0075 - accuracy: 0.9987 - val_loss: 0.1979 - val_accuracy: 0.9470 - lr: 1.0000e-05
Epoch 42/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0089 - accuracy: 0.9986 - val_loss: 0.1893 - val_accuracy: 0.9477 - lr: 1.0000e-05
Epoch 43/100
71/71 [==============================] - 3s 49ms/step - loss: 0.0098 - accuracy: 0.9976 - val_loss: 0.2064 - val_accuracy: 0.9443 - lr: 1.0000e-05
Epoch 44/100
71/71 [==============================] - 4s 50ms/step - loss: 0.0067 - accuracy: 0.9990 - val_loss: 0.2109 - val_accuracy: 0.9433 - lr: 1.0000e-05
94/94 [==============================] - 1s 4ms/step
Cohen’s Kappa Score: 0.9564285714285714
Running CNN2
Epoch 1/100
71/71 [==============================] - 19s 257ms/step - loss: 1.7408 - accuracy: 0.4718 - val_loss: 3.3004 - val_accuracy: 0.1213 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 18s 255ms/step - loss: 1.0736 - accuracy: 0.6700 - val_loss: 3.5122 - val_accuracy: 0.2703 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 18s 254ms/step - loss: 0.7570 - accuracy: 0.7707 - val_loss: 4.6527 - val_accuracy: 0.2910 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 18s 255ms/step - loss: 0.5997 - accuracy: 0.8249 - val_loss: 3.6512 - val_accuracy: 0.3950 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 18s 254ms/step - loss: 0.4691 - accuracy: 0.8638 - val_loss: 5.0946 - val_accuracy: 0.3213 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 18s 255ms/step - loss: 0.3817 - accuracy: 0.8902 - val_loss: 0.6853 - val_accuracy: 0.7800 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 18s 254ms/step - loss: 0.3102 - accuracy: 0.9099 - val_loss: 1.6420 - val_accuracy: 0.6190 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2686 - accuracy: 0.9228 - val_loss: 1.3555 - val_accuracy: 0.6350 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 18s 254ms/step - loss: 0.2496 - accuracy: 0.9286 - val_loss: 2.8667 - val_accuracy: 0.3923 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1891 - accuracy: 0.9481 - val_loss: 2.0188 - val_accuracy: 0.5933 - lr: 0.0010
Epoch 11/100
71/71 [==============================] - 18s 255ms/step - loss: 0.1767 - accuracy: 0.9513 - val_loss: 4.0301 - val_accuracy: 0.4353 - lr: 0.0010
Epoch 12/100
71/71 [==============================] - 18s 254ms/step - loss: 0.1098 - accuracy: 0.9744 - val_loss: 0.1496 - val_accuracy: 0.9623 - lr: 1.0000e-04
Epoch 13/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0893 - accuracy: 0.9836 - val_loss: 0.1377 - val_accuracy: 0.9637 - lr: 1.0000e-04
Epoch 14/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0829 - accuracy: 0.9839 - val_loss: 0.1459 - val_accuracy: 0.9613 - lr: 1.0000e-04
Epoch 15/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0797 - accuracy: 0.9838 - val_loss: 0.1258 - val_accuracy: 0.9713 - lr: 1.0000e-04
Epoch 16/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0762 - accuracy: 0.9862 - val_loss: 0.1224 - val_accuracy: 0.9703 - lr: 1.0000e-04
Epoch 17/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0748 - accuracy: 0.9860 - val_loss: 0.1367 - val_accuracy: 0.9667 - lr: 1.0000e-04
Epoch 18/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0708 - accuracy: 0.9874 - val_loss: 0.1046 - val_accuracy: 0.9750 - lr: 1.0000e-04
Epoch 19/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0701 - accuracy: 0.9865 - val_loss: 0.1063 - val_accuracy: 0.9727 - lr: 1.0000e-04
Epoch 20/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0679 - accuracy: 0.9862 - val_loss: 0.1102 - val_accuracy: 0.9707 - lr: 1.0000e-04
Epoch 21/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0635 - accuracy: 0.9885 - val_loss: 0.1141 - val_accuracy: 0.9727 - lr: 1.0000e-04
Epoch 22/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0621 - accuracy: 0.9886 - val_loss: 0.0974 - val_accuracy: 0.9760 - lr: 1.0000e-04
Epoch 23/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0611 - accuracy: 0.9876 - val_loss: 0.1439 - val_accuracy: 0.9617 - lr: 1.0000e-04
Epoch 24/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0600 - accuracy: 0.9887 - val_loss: 0.1088 - val_accuracy: 0.9717 - lr: 1.0000e-04
Epoch 25/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0577 - accuracy: 0.9893 - val_loss: 0.1088 - val_accuracy: 0.9740 - lr: 1.0000e-04
Epoch 26/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0553 - accuracy: 0.9894 - val_loss: 0.1118 - val_accuracy: 0.9700 - lr: 1.0000e-04
Epoch 27/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0531 - accuracy: 0.9907 - val_loss: 0.1011 - val_accuracy: 0.9700 - lr: 1.0000e-04
Epoch 28/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0481 - accuracy: 0.9917 - val_loss: 0.0759 - val_accuracy: 0.9837 - lr: 1.0000e-05
Epoch 29/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0452 - accuracy: 0.9932 - val_loss: 0.0736 - val_accuracy: 0.9820 - lr: 1.0000e-05
Epoch 30/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0449 - accuracy: 0.9927 - val_loss: 0.0702 - val_accuracy: 0.9833 - lr: 1.0000e-05
Epoch 31/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0458 - accuracy: 0.9921 - val_loss: 0.0729 - val_accuracy: 0.9823 - lr: 1.0000e-05
Epoch 32/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0453 - accuracy: 0.9930 - val_loss: 0.0701 - val_accuracy: 0.9817 - lr: 1.0000e-05
Epoch 33/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0433 - accuracy: 0.9941 - val_loss: 0.0712 - val_accuracy: 0.9833 - lr: 1.0000e-05
Epoch 34/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0427 - accuracy: 0.9945 - val_loss: 0.0690 - val_accuracy: 0.9817 - lr: 1.0000e-05
Epoch 35/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0421 - accuracy: 0.9940 - val_loss: 0.0700 - val_accuracy: 0.9840 - lr: 1.0000e-05
Epoch 36/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0440 - accuracy: 0.9932 - val_loss: 0.0710 - val_accuracy: 0.9820 - lr: 1.0000e-05
Epoch 37/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0426 - accuracy: 0.9931 - val_loss: 0.0709 - val_accuracy: 0.9820 - lr: 1.0000e-05
Epoch 38/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0434 - accuracy: 0.9935 - val_loss: 0.0699 - val_accuracy: 0.9827 - lr: 1.0000e-05
Epoch 39/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0435 - accuracy: 0.9937 - val_loss: 0.0703 - val_accuracy: 0.9823 - lr: 1.0000e-05
Epoch 40/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0424 - accuracy: 0.9944 - val_loss: 0.0698 - val_accuracy: 0.9833 - lr: 1.0000e-06
Epoch 41/100
71/71 [==============================] - 18s 254ms/step - loss: 0.0419 - accuracy: 0.9936 - val_loss: 0.0692 - val_accuracy: 0.9830 - lr: 1.0000e-06
Epoch 42/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0436 - accuracy: 0.9932 - val_loss: 0.0694 - val_accuracy: 0.9820 - lr: 1.0000e-06
Epoch 43/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0418 - accuracy: 0.9947 - val_loss: 0.0696 - val_accuracy: 0.9830 - lr: 1.0000e-06
Epoch 44/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0431 - accuracy: 0.9932 - val_loss: 0.0690 - val_accuracy: 0.9830 - lr: 1.0000e-06
Epoch 45/100
71/71 [==============================] - 18s 255ms/step - loss: 0.0407 - accuracy: 0.9941 - val_loss: 0.0691 - val_accuracy: 0.9833 - lr: 1.0000e-07
94/94 [==============================] - 2s 16ms/step
Cohen’s Kappa Score: 0.9817857142857143
Running VGG_Baseline
Epoch 1/100
71/71 [==============================] - 6s 66ms/step - loss: 3.9192 - accuracy: 0.2463 - val_loss: 100.5357 - val_accuracy: 0.0550 - lr: 0.0100
Epoch 2/100
71/71 [==============================] - 4s 60ms/step - loss: 2.3469 - accuracy: 0.4101 - val_loss: 4.9706 - val_accuracy: 0.0717 - lr: 0.0100
Epoch 3/100
71/71 [==============================] - 4s 60ms/step - loss: 2.0371 - accuracy: 0.4834 - val_loss: 4.9399 - val_accuracy: 0.0930 - lr: 0.0100
Epoch 4/100
71/71 [==============================] - 4s 59ms/step - loss: 1.9283 - accuracy: 0.5030 - val_loss: 4.8601 - val_accuracy: 0.0710 - lr: 0.0100
Epoch 5/100
71/71 [==============================] - 4s 60ms/step - loss: 1.8067 - accuracy: 0.5429 - val_loss: 4.5363 - val_accuracy: 0.1243 - lr: 0.0100
Epoch 6/100
71/71 [==============================] - 4s 59ms/step - loss: 1.7625 - accuracy: 0.5696 - val_loss: 4.2994 - val_accuracy: 0.1277 - lr: 0.0100
Epoch 7/100
71/71 [==============================] - 4s 59ms/step - loss: 1.6848 - accuracy: 0.5963 - val_loss: 3.9251 - val_accuracy: 0.1127 - lr: 0.0100
Epoch 8/100
71/71 [==============================] - 4s 60ms/step - loss: 1.5963 - accuracy: 0.6226 - val_loss: 5.4944 - val_accuracy: 0.1310 - lr: 0.0100
Epoch 9/100
71/71 [==============================] - 4s 60ms/step - loss: 1.4555 - accuracy: 0.6700 - val_loss: 3.7085 - val_accuracy: 0.1437 - lr: 0.0100
Epoch 10/100
71/71 [==============================] - 4s 59ms/step - loss: 1.4006 - accuracy: 0.6928 - val_loss: 10.2228 - val_accuracy: 0.0697 - lr: 0.0100
Epoch 11/100
71/71 [==============================] - 4s 59ms/step - loss: 1.3113 - accuracy: 0.7192 - val_loss: 8.5290 - val_accuracy: 0.0763 - lr: 0.0100
Epoch 12/100
71/71 [==============================] - 4s 59ms/step - loss: 1.2486 - accuracy: 0.7275 - val_loss: 8.1643 - val_accuracy: 0.1203 - lr: 0.0100
Epoch 13/100
71/71 [==============================] - 4s 59ms/step - loss: 1.1755 - accuracy: 0.7584 - val_loss: 8.2964 - val_accuracy: 0.1103 - lr: 0.0100
Epoch 14/100
71/71 [==============================] - 4s 59ms/step - loss: 1.1242 - accuracy: 0.7686 - val_loss: 4.0917 - val_accuracy: 0.1803 - lr: 0.0100
Epoch 15/100
71/71 [==============================] - 4s 59ms/step - loss: 0.8142 - accuracy: 0.8638 - val_loss: 3.1275 - val_accuracy: 0.2473 - lr: 1.0000e-03
Epoch 16/100
71/71 [==============================] - 4s 59ms/step - loss: 0.6776 - accuracy: 0.8889 - val_loss: 2.2424 - val_accuracy: 0.3693 - lr: 1.0000e-03
Epoch 17/100
71/71 [==============================] - 4s 59ms/step - loss: 0.5991 - accuracy: 0.9032 - val_loss: 2.4636 - val_accuracy: 0.3213 - lr: 1.0000e-03
Epoch 18/100
71/71 [==============================] - 4s 59ms/step - loss: 0.5428 - accuracy: 0.9169 - val_loss: 1.7247 - val_accuracy: 0.4670 - lr: 1.0000e-03
Epoch 19/100
71/71 [==============================] - 4s 59ms/step - loss: 0.5016 - accuracy: 0.9255 - val_loss: 1.6074 - val_accuracy: 0.4960 - lr: 1.0000e-03
Epoch 20/100
71/71 [==============================] - 4s 59ms/step - loss: 0.4672 - accuracy: 0.9308 - val_loss: 1.9682 - val_accuracy: 0.4517 - lr: 1.0000e-03
Epoch 21/100
71/71 [==============================] - 4s 59ms/step - loss: 0.4274 - accuracy: 0.9410 - val_loss: 1.1978 - val_accuracy: 0.6790 - lr: 1.0000e-03
Epoch 22/100
71/71 [==============================] - 4s 59ms/step - loss: 0.4021 - accuracy: 0.9432 - val_loss: 1.1860 - val_accuracy: 0.6653 - lr: 1.0000e-03
Epoch 23/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3830 - accuracy: 0.9458 - val_loss: 1.1451 - val_accuracy: 0.6733 - lr: 1.0000e-03
Epoch 24/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3518 - accuracy: 0.9526 - val_loss: 1.2538 - val_accuracy: 0.6663 - lr: 1.0000e-03
Epoch 25/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3296 - accuracy: 0.9607 - val_loss: 1.0292 - val_accuracy: 0.7137 - lr: 1.0000e-03
Epoch 26/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3338 - accuracy: 0.9561 - val_loss: 0.8417 - val_accuracy: 0.7867 - lr: 1.0000e-03
Epoch 27/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3019 - accuracy: 0.9634 - val_loss: 0.8196 - val_accuracy: 0.7807 - lr: 1.0000e-03
Epoch 28/100
71/71 [==============================] - 4s 59ms/step - loss: 0.3133 - accuracy: 0.9561 - val_loss: 1.0286 - val_accuracy: 0.7377 - lr: 1.0000e-03
Epoch 29/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2986 - accuracy: 0.9634 - val_loss: 0.7026 - val_accuracy: 0.8097 - lr: 1.0000e-03
Epoch 30/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2845 - accuracy: 0.9657 - val_loss: 0.8499 - val_accuracy: 0.7783 - lr: 1.0000e-03
Epoch 31/100
71/71 [==============================] - 4s 60ms/step - loss: 0.2689 - accuracy: 0.9684 - val_loss: 0.5115 - val_accuracy: 0.8803 - lr: 1.0000e-03
Epoch 32/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2573 - accuracy: 0.9712 - val_loss: 0.7706 - val_accuracy: 0.8137 - lr: 1.0000e-03
Epoch 33/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2398 - accuracy: 0.9768 - val_loss: 1.8389 - val_accuracy: 0.5563 - lr: 1.0000e-03
Epoch 34/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2482 - accuracy: 0.9704 - val_loss: 0.8435 - val_accuracy: 0.7957 - lr: 1.0000e-03
Epoch 35/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2505 - accuracy: 0.9696 - val_loss: 0.5964 - val_accuracy: 0.8523 - lr: 1.0000e-03
Epoch 36/100
71/71 [==============================] - 4s 59ms/step - loss: 0.2368 - accuracy: 0.9740 - val_loss: 0.8071 - val_accuracy: 0.7947 - lr: 1.0000e-03
Epoch 37/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1987 - accuracy: 0.9862 - val_loss: 0.2508 - val_accuracy: 0.9690 - lr: 1.0000e-04
Epoch 38/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1823 - accuracy: 0.9918 - val_loss: 0.2365 - val_accuracy: 0.9763 - lr: 1.0000e-04
Epoch 39/100
71/71 [==============================] - 4s 60ms/step - loss: 0.1776 - accuracy: 0.9920 - val_loss: 0.2301 - val_accuracy: 0.9777 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1719 - accuracy: 0.9947 - val_loss: 0.2199 - val_accuracy: 0.9797 - lr: 1.0000e-04
Epoch 41/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1724 - accuracy: 0.9938 - val_loss: 0.2247 - val_accuracy: 0.9777 - lr: 1.0000e-04
Epoch 42/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1692 - accuracy: 0.9949 - val_loss: 0.2228 - val_accuracy: 0.9793 - lr: 1.0000e-04
Epoch 43/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1653 - accuracy: 0.9946 - val_loss: 0.2141 - val_accuracy: 0.9803 - lr: 1.0000e-04
Epoch 44/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1662 - accuracy: 0.9946 - val_loss: 0.2149 - val_accuracy: 0.9810 - lr: 1.0000e-04
Epoch 45/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1622 - accuracy: 0.9950 - val_loss: 0.2074 - val_accuracy: 0.9827 - lr: 1.0000e-04
Epoch 46/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1597 - accuracy: 0.9957 - val_loss: 0.2066 - val_accuracy: 0.9820 - lr: 1.0000e-04
Epoch 47/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1599 - accuracy: 0.9961 - val_loss: 0.2058 - val_accuracy: 0.9823 - lr: 1.0000e-04
Epoch 48/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1564 - accuracy: 0.9966 - val_loss: 0.2037 - val_accuracy: 0.9840 - lr: 1.0000e-04
Epoch 49/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1541 - accuracy: 0.9969 - val_loss: 0.2047 - val_accuracy: 0.9817 - lr: 1.0000e-04
Epoch 50/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1518 - accuracy: 0.9975 - val_loss: 0.2042 - val_accuracy: 0.9827 - lr: 1.0000e-04
Epoch 51/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1506 - accuracy: 0.9965 - val_loss: 0.1989 - val_accuracy: 0.9823 - lr: 1.0000e-04
Epoch 52/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1494 - accuracy: 0.9968 - val_loss: 0.1993 - val_accuracy: 0.9810 - lr: 1.0000e-04
Epoch 53/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1476 - accuracy: 0.9976 - val_loss: 0.1991 - val_accuracy: 0.9827 - lr: 1.0000e-04
Epoch 54/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1478 - accuracy: 0.9966 - val_loss: 0.2084 - val_accuracy: 0.9777 - lr: 1.0000e-04
Epoch 55/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1464 - accuracy: 0.9968 - val_loss: 0.2190 - val_accuracy: 0.9747 - lr: 1.0000e-04
Epoch 56/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1429 - accuracy: 0.9976 - val_loss: 0.1886 - val_accuracy: 0.9833 - lr: 1.0000e-04
Epoch 57/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1415 - accuracy: 0.9981 - val_loss: 0.1906 - val_accuracy: 0.9817 - lr: 1.0000e-04
Epoch 58/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1412 - accuracy: 0.9975 - val_loss: 0.1832 - val_accuracy: 0.9843 - lr: 1.0000e-04
Epoch 59/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1388 - accuracy: 0.9976 - val_loss: 0.1984 - val_accuracy: 0.9827 - lr: 1.0000e-04
Epoch 60/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1373 - accuracy: 0.9973 - val_loss: 0.1901 - val_accuracy: 0.9813 - lr: 1.0000e-04
Epoch 61/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1372 - accuracy: 0.9973 - val_loss: 0.1920 - val_accuracy: 0.9823 - lr: 1.0000e-04
Epoch 62/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1344 - accuracy: 0.9980 - val_loss: 0.1873 - val_accuracy: 0.9817 - lr: 1.0000e-04
Epoch 63/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1354 - accuracy: 0.9977 - val_loss: 0.1844 - val_accuracy: 0.9840 - lr: 1.0000e-04
Epoch 64/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1336 - accuracy: 0.9978 - val_loss: 0.1784 - val_accuracy: 0.9843 - lr: 1.0000e-05
Epoch 65/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1312 - accuracy: 0.9984 - val_loss: 0.1783 - val_accuracy: 0.9850 - lr: 1.0000e-05
Epoch 66/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1311 - accuracy: 0.9984 - val_loss: 0.1776 - val_accuracy: 0.9850 - lr: 1.0000e-05
Epoch 67/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1310 - accuracy: 0.9981 - val_loss: 0.1815 - val_accuracy: 0.9840 - lr: 1.0000e-05
Epoch 68/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1303 - accuracy: 0.9981 - val_loss: 0.1792 - val_accuracy: 0.9843 - lr: 1.0000e-05
Epoch 69/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1298 - accuracy: 0.9989 - val_loss: 0.1759 - val_accuracy: 0.9847 - lr: 1.0000e-05
Epoch 70/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1295 - accuracy: 0.9992 - val_loss: 0.1749 - val_accuracy: 0.9863 - lr: 1.0000e-05
Epoch 71/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1289 - accuracy: 0.9993 - val_loss: 0.1749 - val_accuracy: 0.9853 - lr: 1.0000e-05
Epoch 72/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1293 - accuracy: 0.9993 - val_loss: 0.1742 - val_accuracy: 0.9850 - lr: 1.0000e-05
Epoch 73/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1281 - accuracy: 0.9990 - val_loss: 0.1731 - val_accuracy: 0.9853 - lr: 1.0000e-05
Epoch 74/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1298 - accuracy: 0.9988 - val_loss: 0.1752 - val_accuracy: 0.9863 - lr: 1.0000e-05
Epoch 75/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1296 - accuracy: 0.9989 - val_loss: 0.1742 - val_accuracy: 0.9850 - lr: 1.0000e-05
Epoch 76/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1304 - accuracy: 0.9978 - val_loss: 0.1731 - val_accuracy: 0.9853 - lr: 1.0000e-05
Epoch 77/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1294 - accuracy: 0.9990 - val_loss: 0.1757 - val_accuracy: 0.9857 - lr: 1.0000e-05
Epoch 78/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1297 - accuracy: 0.9986 - val_loss: 0.1767 - val_accuracy: 0.9853 - lr: 1.0000e-05
Epoch 79/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1301 - accuracy: 0.9981 - val_loss: 0.1734 - val_accuracy: 0.9853 - lr: 1.0000e-06
Epoch 80/100
71/71 [==============================] - 4s 59ms/step - loss: 0.1275 - accuracy: 0.9991 - val_loss: 0.1752 - val_accuracy: 0.9850 - lr: 1.0000e-06
94/94 [==============================] - 1s 6ms/step
Cohen’s Kappa Score: 0.9832142857142857

Analysing of Graphs¶

In [53]:
for fig in figures_cutout:
    fig()
Things Observed
    For CNN, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes For CNN 2, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes For VGG 16, - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a similar accuracy and minor spikes

Analysing of Result¶

In [54]:
overall.iloc[-3:]
Out[54]:
Model Name Epochs Batch Size Train Loss Test Loss Train Acc Test Acc Kappa Comments
9 CNN cutout 44 128 0.009736 0.181197 0.997674 0.951667 0.956429 NaN
10 CNN2 cutout 45 128 0.042143 0.069994 0.994019 0.984000 0.981786 NaN
11 VGG_Baseline cutout 80 128 0.129479 0.174903 0.999225 0.986333 0.983214 NaN

__Model Improvement - Add Regularisation__¶


  • Does Regularisation helps ?

Goal:

  • Try and reduce the adjusted loss function and prevent overfitting or underfitting
  • Improvement of model

CNN Baseline Updated¶

  • Added l2 regularisation
In [55]:
def cnn_lr(optimizer='adam', name='CNN_Baseline_LR', l2_reg=0.01):
    model = Sequential(name=name)

    # Input layer
    model.add(Input(shape=(128, 128, 1)))
    
   # First Convolutional Block
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(l2_reg)))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))
    
   # Second Convolutional Block
    model.add(Conv2D(128, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(l2_reg)))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))
    
   # Third Convolutional Block
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(l2_reg)))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.4))

    model.add(Flatten())
    # Dense Block with L2 
    model.add(Dense(512, activation='relu', kernel_regularizer=l2(l2_reg)))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))

    model.add(Dense(128, activation='relu', kernel_regularizer=l2(l2_reg)))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))

    model.add(Dense(15, activation='softmax'))

    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

    model.summary()

    return model
cnn_lr = cnn_lr()
Model: "CNN_Baseline_LR"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 128, 128, 64)      640       
                                                                 
 batch_normalization (BatchN  (None, 128, 128, 64)     256       
 ormalization)                                                   
                                                                 
 max_pooling2d (MaxPooling2D  (None, 64, 64, 64)       0         
 )                                                               
                                                                 
 dropout (Dropout)           (None, 64, 64, 64)        0         
                                                                 
 conv2d_1 (Conv2D)           (None, 64, 64, 128)       73856     
                                                                 
 batch_normalization_1 (Batc  (None, 64, 64, 128)      512       
 hNormalization)                                                 
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 32, 32, 128)      0         
 2D)                                                             
                                                                 
 dropout_1 (Dropout)         (None, 32, 32, 128)       0         
                                                                 
 conv2d_2 (Conv2D)           (None, 32, 32, 64)        73792     
                                                                 
 batch_normalization_2 (Batc  (None, 32, 32, 64)       256       
 hNormalization)                                                 
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 16, 16, 64)       0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 16, 16, 64)        0         
                                                                 
 flatten (Flatten)           (None, 16384)             0         
                                                                 
 dense (Dense)               (None, 512)               8389120   
                                                                 
 batch_normalization_3 (Batc  (None, 512)              2048      
 hNormalization)                                                 
                                                                 
 dropout_3 (Dropout)         (None, 512)               0         
                                                                 
 dense_1 (Dense)             (None, 128)               65664     
                                                                 
 batch_normalization_4 (Batc  (None, 128)              512       
 hNormalization)                                                 
                                                                 
 dropout_4 (Dropout)         (None, 128)               0         
                                                                 
 dense_2 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 8,608,591
Trainable params: 8,606,799
Non-trainable params: 1,792
_________________________________________________________________

Running the Model¶

In [56]:
results_cnn_lr, fig_cnn_lr = evaluator.model_evaluate( train_ds, val_ds , cnn_lr, base_hparams)
Epoch 1/100
71/71 [==============================] - 5s 53ms/step - loss: 14.9827 - accuracy: 0.3425 - val_loss: 14.0496 - val_accuracy: 0.1783 - lr: 0.0010
Epoch 2/100
71/71 [==============================] - 3s 44ms/step - loss: 8.7951 - accuracy: 0.5913 - val_loss: 7.7349 - val_accuracy: 0.3440 - lr: 0.0010
Epoch 3/100
71/71 [==============================] - 3s 44ms/step - loss: 5.9010 - accuracy: 0.6977 - val_loss: 7.0035 - val_accuracy: 0.2513 - lr: 0.0010
Epoch 4/100
71/71 [==============================] - 3s 44ms/step - loss: 4.8807 - accuracy: 0.7591 - val_loss: 5.6895 - val_accuracy: 0.4397 - lr: 0.0010
Epoch 5/100
71/71 [==============================] - 3s 44ms/step - loss: 4.2563 - accuracy: 0.7941 - val_loss: 5.3168 - val_accuracy: 0.5137 - lr: 0.0010
Epoch 6/100
71/71 [==============================] - 3s 44ms/step - loss: 4.0950 - accuracy: 0.8250 - val_loss: 4.9939 - val_accuracy: 0.5263 - lr: 0.0010
Epoch 7/100
71/71 [==============================] - 3s 44ms/step - loss: 3.7052 - accuracy: 0.8537 - val_loss: 8.6494 - val_accuracy: 0.1347 - lr: 0.0010
Epoch 8/100
71/71 [==============================] - 3s 44ms/step - loss: 3.6032 - accuracy: 0.8620 - val_loss: 6.2482 - val_accuracy: 0.2923 - lr: 0.0010
Epoch 9/100
71/71 [==============================] - 3s 44ms/step - loss: 3.5958 - accuracy: 0.8683 - val_loss: 4.6657 - val_accuracy: 0.5677 - lr: 0.0010
Epoch 10/100
71/71 [==============================] - 3s 44ms/step - loss: 3.4977 - accuracy: 0.8749 - val_loss: 4.6597 - val_accuracy: 0.6000 - lr: 0.0010
Epoch 11/100
71/71 [==============================] - 3s 44ms/step - loss: 3.6819 - accuracy: 0.8721 - val_loss: 8.3454 - val_accuracy: 0.3127 - lr: 0.0010
Epoch 12/100
71/71 [==============================] - 3s 44ms/step - loss: 3.6828 - accuracy: 0.8785 - val_loss: 5.5176 - val_accuracy: 0.4833 - lr: 0.0010
Epoch 13/100
71/71 [==============================] - 3s 44ms/step - loss: 3.5147 - accuracy: 0.8906 - val_loss: 5.2196 - val_accuracy: 0.5153 - lr: 0.0010
Epoch 14/100
71/71 [==============================] - 3s 44ms/step - loss: 3.5609 - accuracy: 0.8861 - val_loss: 4.4416 - val_accuracy: 0.7063 - lr: 0.0010
Epoch 15/100
71/71 [==============================] - 3s 44ms/step - loss: 3.7102 - accuracy: 0.8878 - val_loss: 5.7496 - val_accuracy: 0.4623 - lr: 0.0010
Epoch 16/100
71/71 [==============================] - 3s 44ms/step - loss: 3.6359 - accuracy: 0.8918 - val_loss: 4.4195 - val_accuracy: 0.6877 - lr: 0.0010
Epoch 17/100
71/71 [==============================] - 3s 44ms/step - loss: 3.6130 - accuracy: 0.8927 - val_loss: 4.1744 - val_accuracy: 0.6743 - lr: 0.0010
Epoch 18/100
71/71 [==============================] - 3s 44ms/step - loss: 3.4854 - accuracy: 0.8938 - val_loss: 5.4007 - val_accuracy: 0.5643 - lr: 0.0010
Epoch 19/100
71/71 [==============================] - 3s 44ms/step - loss: 3.6936 - accuracy: 0.9025 - val_loss: 4.0829 - val_accuracy: 0.7067 - lr: 0.0010
Epoch 20/100
71/71 [==============================] - 3s 44ms/step - loss: 3.5595 - accuracy: 0.8961 - val_loss: 6.7631 - val_accuracy: 0.3923 - lr: 0.0010
Epoch 21/100
71/71 [==============================] - 3s 44ms/step - loss: 3.3630 - accuracy: 0.9093 - val_loss: 4.8026 - val_accuracy: 0.6080 - lr: 0.0010
Epoch 22/100
71/71 [==============================] - 3s 44ms/step - loss: 3.6045 - accuracy: 0.8942 - val_loss: 4.6349 - val_accuracy: 0.6700 - lr: 0.0010
Epoch 23/100
71/71 [==============================] - 3s 44ms/step - loss: 3.7745 - accuracy: 0.8954 - val_loss: 5.8354 - val_accuracy: 0.5153 - lr: 0.0010
Epoch 24/100
71/71 [==============================] - 3s 44ms/step - loss: 3.4252 - accuracy: 0.9116 - val_loss: 5.2437 - val_accuracy: 0.5150 - lr: 0.0010
Epoch 25/100
71/71 [==============================] - 3s 44ms/step - loss: 3.1529 - accuracy: 0.9425 - val_loss: 3.5722 - val_accuracy: 0.7687 - lr: 1.0000e-04
Epoch 26/100
71/71 [==============================] - 3s 44ms/step - loss: 2.6657 - accuracy: 0.9725 - val_loss: 2.5599 - val_accuracy: 0.9393 - lr: 1.0000e-04
Epoch 27/100
71/71 [==============================] - 3s 44ms/step - loss: 2.2905 - accuracy: 0.9784 - val_loss: 2.2732 - val_accuracy: 0.9280 - lr: 1.0000e-04
Epoch 28/100
71/71 [==============================] - 3s 44ms/step - loss: 1.9705 - accuracy: 0.9847 - val_loss: 1.9093 - val_accuracy: 0.9557 - lr: 1.0000e-04
Epoch 29/100
71/71 [==============================] - 3s 44ms/step - loss: 1.7019 - accuracy: 0.9880 - val_loss: 1.8302 - val_accuracy: 0.9070 - lr: 1.0000e-04
Epoch 30/100
71/71 [==============================] - 3s 44ms/step - loss: 1.4787 - accuracy: 0.9894 - val_loss: 1.4622 - val_accuracy: 0.9610 - lr: 1.0000e-04
Epoch 31/100
71/71 [==============================] - 3s 43ms/step - loss: 1.2851 - accuracy: 0.9912 - val_loss: 1.3713 - val_accuracy: 0.9360 - lr: 1.0000e-04
Epoch 32/100
71/71 [==============================] - 3s 45ms/step - loss: 1.1304 - accuracy: 0.9919 - val_loss: 1.1340 - val_accuracy: 0.9637 - lr: 1.0000e-04
Epoch 33/100
71/71 [==============================] - 3s 44ms/step - loss: 0.9971 - accuracy: 0.9918 - val_loss: 1.1236 - val_accuracy: 0.9317 - lr: 1.0000e-04
Epoch 34/100
71/71 [==============================] - 3s 44ms/step - loss: 0.8820 - accuracy: 0.9951 - val_loss: 0.9215 - val_accuracy: 0.9647 - lr: 1.0000e-04
Epoch 35/100
71/71 [==============================] - 3s 44ms/step - loss: 0.7901 - accuracy: 0.9944 - val_loss: 0.8854 - val_accuracy: 0.9493 - lr: 1.0000e-04
Epoch 36/100
71/71 [==============================] - 3s 44ms/step - loss: 0.7138 - accuracy: 0.9948 - val_loss: 0.7876 - val_accuracy: 0.9593 - lr: 1.0000e-04
Epoch 37/100
71/71 [==============================] - 3s 44ms/step - loss: 0.6501 - accuracy: 0.9950 - val_loss: 0.7119 - val_accuracy: 0.9637 - lr: 1.0000e-04
Epoch 38/100
71/71 [==============================] - 3s 44ms/step - loss: 0.6037 - accuracy: 0.9950 - val_loss: 0.6778 - val_accuracy: 0.9643 - lr: 1.0000e-04
Epoch 39/100
71/71 [==============================] - 3s 44ms/step - loss: 0.5697 - accuracy: 0.9947 - val_loss: 0.6692 - val_accuracy: 0.9560 - lr: 1.0000e-04
Epoch 40/100
71/71 [==============================] - 3s 44ms/step - loss: 0.5531 - accuracy: 0.9924 - val_loss: 0.9872 - val_accuracy: 0.8650 - lr: 1.0000e-04
Epoch 41/100
71/71 [==============================] - 3s 44ms/step - loss: 0.5371 - accuracy: 0.9930 - val_loss: 0.7393 - val_accuracy: 0.9267 - lr: 1.0000e-04
Epoch 42/100
71/71 [==============================] - 3s 44ms/step - loss: 0.5282 - accuracy: 0.9929 - val_loss: 0.6406 - val_accuracy: 0.9520 - lr: 1.0000e-04
Epoch 43/100
71/71 [==============================] - 3s 44ms/step - loss: 0.5256 - accuracy: 0.9906 - val_loss: 0.6383 - val_accuracy: 0.9567 - lr: 1.0000e-04
Epoch 44/100
71/71 [==============================] - 3s 44ms/step - loss: 0.5316 - accuracy: 0.9894 - val_loss: 0.6111 - val_accuracy: 0.9600 - lr: 1.0000e-04

Analysing the graph¶

In [57]:
fig_cnn_lr()
Things Observed
    - Training and Validation loss - For the training loss, it starts relatively high and drops quickly and continues to decrease steadily as the epochs increases - For the validation loss, similar patterns occur with the epoch increasing with really minor spikes and it is the same as valdiation loss - Training and Validation accuracy - For the training accuracy, it increases sharply at the beginning and continues to increase gradually, indicates effective lerarning - For the validation accuracy, similar pattern is seen but with a higher accuracy than train and initial spikes
In [58]:
pd.Series(results_cnn_lr)
Out[58]:
Model Name    CNN_Baseline_LR
Epochs                     44
Batch Size                128
Train Loss           0.881994
Test Loss            0.921476
Train Acc            0.995126
Test Acc             0.964667
dtype: object

Compare the difference¶

In [29]:
# pd.Series(results_cnn)

Evaluation on test set¶

  • This is to check for overfitting
  • And also how the model performs with respect to each classes
In [60]:
cls_df_cnn_lr= evaluation_test(cnn_lr, X_test_big, y_test_big, labels_dict)
cls_df_cnn_lr
94/94 [==============================] - 1s 4ms/step
Accuracy: 0.967
Out[60]:
precision recall f1-score support
Potato 0.926108 0.940 0.933002 200.000
Papaya 0.915094 0.970 0.941748 200.000
Carrot 0.968421 0.920 0.943590 200.000
Brinjal 0.950739 0.965 0.957816 200.000
Cabbage 0.929577 0.990 0.958838 200.000
Capsicum 0.942584 0.985 0.963325 200.000
Radish 0.989529 0.945 0.966752 200.000
accuracy 0.967000 0.967 0.967000 0.967
macro avg 0.967798 0.967 0.967094 3000.000
weighted avg 0.967798 0.967 0.967094 3000.000
Cauliflower 0.989637 0.955 0.972010 200.000
Cucumber 0.979695 0.965 0.972292 200.000
Tomato 0.970297 0.980 0.975124 200.000
Pumpkin 0.989796 0.970 0.979798 200.000
Broccoli 1.000000 0.965 0.982188 200.000
Bottle_Gourd 1.000000 0.970 0.984772 200.000
Bean 0.975490 0.995 0.985149 200.000
Bitter_Gourd 0.990000 0.990 0.990000 200.000
In [61]:
plot_classification_heatmap(cls_df_cnn_lr)
Things Observed
    - From the heatmap and the evaluation score, we can see that the model is 06.7% and similar to train accuracy - In addition the mdoel is best at predicting Bottle Gourd and worst at Potato in terms of f1 score

Cohen Kappa Coefficient¶

In [62]:
y_pred = cnn_lr.predict(X_test_big)
y_pred_classes = np.argmax(y_pred, axis=1)
kappa_cnn_lr = cohen_kappa_score(y_test_big, y_pred_classes)
print("Cohen’s Kappa Score:", kappa_cnn_lr)
94/94 [==============================] - 0s 4ms/step
Cohen’s Kappa Score: 0.9646428571428571
Things Observed
    - From the Cophen Kappa Coefficient, we can see that the model is very reliable
In [63]:
results_cnn_lr['Model Name'] = 'CNN_Reg'
results_cnn_lr['Kappa'] = kappa_cnn_lr
overall = pd.concat([overall, pd.DataFrame([results_cnn_lr])], ignore_index=True)
overall
Out[63]:
Model Name Epochs Batch Size Train Loss Test Loss Train Acc Test Acc Kappa Comments
0 CNN basic one 38 128 0.017855 0.623409 0.996677 0.840000 0.839286 NaN
1 CNN2 basic one 53 128 0.015596 0.062343 0.999668 0.982667 0.985000 NaN
2 VGG_Baseline basic one 63 128 0.169651 0.240200 0.996788 0.974000 0.972500 NaN
3 CNN basic two 24 128 0.228279 0.786181 0.941294 0.758333 0.745714 NaN
4 CNN2 basic two 77 128 0.038791 0.089882 0.997009 0.979000 0.979286 NaN
5 VGG_Baseline basic two 98 128 0.144072 0.220837 0.999778 0.976333 0.974643 NaN
6 CNN cutmix 42 128 0.164451 0.178881 0.998892 0.945667 0.951071 NaN
7 CNN2 cutmix 52 128 0.157783 0.055629 0.999335 0.992333 0.990714 NaN
8 VGG_Baseline cutmix 71 128 0.238155 0.141491 0.999889 0.994333 0.991429 NaN
9 CNN cutout 44 128 0.009736 0.181197 0.997674 0.951667 0.956429 NaN
10 CNN2 cutout 45 128 0.042143 0.069994 0.994019 0.984000 0.981786 NaN
11 VGG_Baseline cutout 80 128 0.129479 0.174903 0.999225 0.986333 0.983214 NaN
12 CNN_Reg 44 128 0.881994 0.921476 0.995126 0.964667 0.964643 NaN

CNN2 Baseline Updated¶

  • Added l2 regularisation
In [64]:
def CNN2_Reg(name='CNN2_Baseline', l2_reg=0.01, dropout_rate=0.5):
    filters_list = [32, 64, 128, 128, 128, 128]
    model = Sequential(name=name)

    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', input_shape=(128, 128, 1), kernel_regularizer=l2(l2_reg)))
    model.add(BatchNormalization())
    model.add(ReLU())
    model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', kernel_regularizer=l2(l2_reg)))
    model.add(BatchNormalization())
    model.add(ReLU())

    model.add(Dropout(dropout_rate))

    for filters in filters_list[2:]:
        model.add(Conv2D(filters=filters, kernel_size=(3, 3), padding='same', kernel_regularizer=l2(l2_reg)))
        model.add(BatchNormalization())
        model.add(ReLU())

    model.add(GlobalAveragePooling2D())
    model.add(Dense(256, activation='relu', kernel_regularizer=l2(l2_reg)))
    model.add(Dropout(dropout_rate))
    model.add(Dense(15, activation='softmax')) 
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    return model
cnn2_reg = CNN2_Reg()
Model: "CNN2_Baseline"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 128, 128, 32)      320       
                                                                 
 batch_normalization (BatchN  (None, 128, 128, 32)     128       
 ormalization)                                                   
                                                                 
 re_lu (ReLU)                (None, 128, 128, 32)      0         
                                                                 
 conv2d_1 (Conv2D)           (None, 128, 128, 64)      18496     
                                                                 
 batch_normalization_1 (Batc  (None, 128, 128, 64)     256       
 hNormalization)                                                 
                                                                 
 re_lu_1 (ReLU)              (None, 128, 128, 64)      0         
                                                                 
 dropout (Dropout)           (None, 128, 128, 64)      0         
                                                                 
 conv2d_2 (Conv2D)           (None, 128, 128, 128)     73856     
                                                                 
 batch_normalization_2 (Batc  (None, 128, 128, 128)    512       
 hNormalization)                                                 
                                                                 
 re_lu_2 (ReLU)              (None, 128, 128, 128)     0         
                                                                 
 conv2d_3 (Conv2D)           (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_3 (Batc  (None, 128, 128, 128)    512       
 hNormalization)                                                 
                                                                 
 re_lu_3 (ReLU)              (None, 128, 128, 128)     0         
                                                                 
 conv2d_4 (Conv2D)           (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_4 (Batc  (None, 128, 128, 128)    512       
 hNormalization)                                                 
                                                                 
 re_lu_4 (ReLU)              (None, 128, 128, 128)     0         
                                                                 
 conv2d_5 (Conv2D)           (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_5 (Batc  (None, 128, 128, 128)    512       
 hNormalization)                                                 
                                                                 
 re_lu_5 (ReLU)              (None, 128, 128, 128)     0         
                                                                 
 global_average_pooling2d (G  (None, 128)              0         
 lobalAveragePooling2D)                                          
                                                                 
 dense (Dense)               (None, 256)               33024     
                                                                 
 dropout_1 (Dropout)         (None, 256)               0         
                                                                 
 dense_1 (Dense)             (None, 15)                3855      
                                                                 
=================================================================
Total params: 574,735
Trainable params: 573,519
Non-trainable params: 1,216
_________________________________________________________________

Running the model¶

In [72]:
results_cnn2_lr, fig_cnn2_lr = evaluator.model_evaluate( train_ds_cutmix, val_ds_cutmix , cnn2_reg, base_hparams)
Epoch 1/100
71/71 [==============================] - 19s 262ms/step - loss: 0.4528 - accuracy: 0.9846 - val_loss: 0.3354 - val_accuracy: 0.9693 - lr: 1.0000e-07
Epoch 2/100
71/71 [==============================] - 19s 261ms/step - loss: 0.4495 - accuracy: 0.9870 - val_loss: 0.3407 - val_accuracy: 0.9663 - lr: 1.0000e-07
Epoch 3/100
71/71 [==============================] - 19s 261ms/step - loss: 0.4488 - accuracy: 0.9870 - val_loss: 0.3448 - val_accuracy: 0.9640 - lr: 1.0000e-07
Epoch 4/100
71/71 [==============================] - 19s 262ms/step - loss: 0.4494 - accuracy: 0.9870 - val_loss: 0.3489 - val_accuracy: 0.9623 - lr: 1.0000e-07
Epoch 5/100
71/71 [==============================] - 19s 261ms/step - loss: 0.4510 - accuracy: 0.9873 - val_loss: 0.3508 - val_accuracy: 0.9600 - lr: 1.0000e-07
Epoch 6/100
71/71 [==============================] - 19s 261ms/step - loss: 0.4502 - accuracy: 0.9872 - val_loss: 0.3522 - val_accuracy: 0.9590 - lr: 1.0000e-07
Epoch 7/100
71/71 [==============================] - 19s 261ms/step - loss: 0.4488 - accuracy: 0.9852 - val_loss: 0.3514 - val_accuracy: 0.9593 - lr: 1.0000e-08
Epoch 8/100
71/71 [==============================] - 19s 262ms/step - loss: 0.4497 - accuracy: 0.9862 - val_loss: 0.3506 - val_accuracy: 0.9593 - lr: 1.0000e-08
Epoch 9/100
71/71 [==============================] - 19s 262ms/step - loss: 0.4514 - accuracy: 0.9858 - val_loss: 0.3507 - val_accuracy: 0.9593 - lr: 1.0000e-08
Epoch 10/100
71/71 [==============================] - 19s 262ms/step - loss: 0.4511 - accuracy: 0.9853 - val_loss: 0.3511 - val_accuracy: 0.9593 - lr: 1.0000e-08
Epoch 11/100
71/71 [==============================] - 19s 262ms/step - loss: 0.4522 - accuracy: 0.9855 - val_loss: 0.3505 - val_accuracy: 0.9593 - lr: 1.0000e-08
In [66]:
pd.Series(results_cnn2_lr)
Out[66]:
Model Name    CNN2_Baseline
Epochs                   78
Batch Size              128
Train Loss         0.307886
Test Loss          0.334669
Train Acc          0.985711
Test Acc           0.970333
dtype: object

Comparing the difference¶

In [ ]:
# pd.Series(results_cnn2)

Analysing the graph¶

In [67]:
fig_cnn2_lr()

Evaluation on test set¶

  • This is to check for overfitting
  • And also how the model performs with respect to each classes
In [68]:
cls_df_cnn2lr= evaluation_test(cnn2_reg, X_test_big, y_test_big, labels_dict)
cls_df_cnn2lr
94/94 [==============================] - 2s 16ms/step
Accuracy: 0.9693333333333334
Out[68]:
precision recall f1-score support
Cauliflower 0.903670 0.985000 0.942584 200.000000
Potato 0.949749 0.945000 0.947368 200.000000
Tomato 0.904977 1.000000 0.950119 200.000000
Radish 0.994536 0.910000 0.950392 200.000000
Papaya 0.924883 0.985000 0.953995 200.000000
Pumpkin 0.960591 0.975000 0.967742 200.000000
accuracy 0.969333 0.969333 0.969333 0.969333
weighted avg 0.970798 0.969333 0.969509 3000.000000
macro avg 0.970798 0.969333 0.969509 3000.000000
Cucumber 0.989637 0.955000 0.972010 200.000000
Bitter_Gourd 0.984615 0.960000 0.972152 200.000000
Bean 0.989744 0.965000 0.977215 200.000000
Capsicum 0.994845 0.965000 0.979695 200.000000
Broccoli 0.989796 0.970000 0.979798 200.000000
Cabbage 0.994872 0.970000 0.982278 200.000000
Brinjal 1.000000 0.970000 0.984772 200.000000
Carrot 0.990000 0.990000 0.990000 200.000000
Bottle_Gourd 0.990050 0.995000 0.992519 200.000000
In [69]:
plot_classification_heatmap(cls_df_cnn2lr)
Things Observed
    - From the heatmap and the evaluation score, we can see that the model is at 97.7% showing no overfitting - In addition, the model is best at prediction Bottle Gourd and worst at Brinjal in terms of f1 score

Cohen Kappa Coefficient¶

In [70]:
y_pred = cnn2_reg.predict(X_test_big)
y_pred_classes = np.argmax(y_pred, axis=1)
kappa_cnn2_reg = cohen_kappa_score(y_test_big, y_pred_classes)
print("Cohen’s Kappa Score:", kappa_cnn2_reg)
94/94 [==============================] - 2s 16ms/step
Cohen’s Kappa Score: 0.9671428571428572

Adding Scores to Dataframe¶

In [71]:
results_cnn2_lr['Model Name'] = 'CNN2_Reg'
results_cnn2_lr['Kappa'] = kappa_cnn2_reg
overall = pd.concat([overall, pd.DataFrame([results_cnn2_lr])], ignore_index=True)
overall
Out[71]:
Model Name Epochs Batch Size Train Loss Test Loss Train Acc Test Acc Kappa Comments
0 CNN basic one 38 128 0.017855 0.623409 0.996677 0.840000 0.839286 NaN
1 CNN2 basic one 53 128 0.015596 0.062343 0.999668 0.982667 0.985000 NaN
2 VGG_Baseline basic one 63 128 0.169651 0.240200 0.996788 0.974000 0.972500 NaN
3 CNN basic two 24 128 0.228279 0.786181 0.941294 0.758333 0.745714 NaN
4 CNN2 basic two 77 128 0.038791 0.089882 0.997009 0.979000 0.979286 NaN
5 VGG_Baseline basic two 98 128 0.144072 0.220837 0.999778 0.976333 0.974643 NaN
6 CNN cutmix 42 128 0.164451 0.178881 0.998892 0.945667 0.951071 NaN
7 CNN2 cutmix 52 128 0.157783 0.055629 0.999335 0.992333 0.990714 NaN
8 VGG_Baseline cutmix 71 128 0.238155 0.141491 0.999889 0.994333 0.991429 NaN
9 CNN cutout 44 128 0.009736 0.181197 0.997674 0.951667 0.956429 NaN
10 CNN2 cutout 45 128 0.042143 0.069994 0.994019 0.984000 0.981786 NaN
11 VGG_Baseline cutout 80 128 0.129479 0.174903 0.999225 0.986333 0.983214 NaN
12 CNN_Reg 44 128 0.881994 0.921476 0.995126 0.964667 0.964643 NaN
13 CNN2_Reg 78 128 0.307886 0.334669 0.985711 0.970333 0.967143 NaN
Things Observed
    - Overall, the best models are cnn cutmix and vgg cutmix

__Model Improvement - Learning Rate__¶


Cosine Annealing¶

  • Type of learning rate schedule that has the effect of starting with a large learning rate that is relatively rapidly decreased to a minimum value before being increased rapidly again
  • The resetting of the learning rate acts like a simulated restart of the learning process and the re-use of good weights as the starting point of the restart is referred to as a "warm restart" in contrast to a "cold restart" where a new set of small random numbers may be used as a starting point

Data used will be the original dataset

In [31]:
def lr_warmup_cosine_decay(global_step=10, warmup_steps=100, hold=0, total_steps=1000, start_lr=0.01, target_lr=1e-3):
    if total_steps <= warmup_steps:
        raise ValueError("total_steps must be greater than warmup_steps.")
    global_step = float(global_step)
    warmup_steps = float(warmup_steps)
    hold = float(hold)
    total_steps = float(total_steps)
    start_lr = float(start_lr)
    target_lr = float(target_lr)

    if global_step < warmup_steps:
        warmup_lr = start_lr + (target_lr - start_lr) * (global_step / warmup_steps)
        return warmup_lr
    elif global_step <= warmup_steps + hold:
        return target_lr
    else:
        decayed = 0.5 * (1 + np.cos(np.pi * (global_step - warmup_steps - hold) / (total_steps - warmup_steps - hold)))
        decayed_lr = target_lr + (start_lr - target_lr) * decayed
        return decayed_lr

CNN Model with special LR¶

In [34]:
def CNN1_reg(optimizer=None, name='CNN_secial_R'):
    model = Sequential(name=name)

    # Input layer
    model.add(Input(shape=(128, 128, 1)))

    # First Convolutional Block
    model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))

    # Second Convolutional Block
    model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.3))

    # Third Convolutional Block
    model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.4))

    model.add(Flatten())

    # Dense Block
    model.add(Dense(512, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))

    # Dense Block
    model.add(Dense(128, activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))

    # Output Layer
    model.add(Dense(15, activation='softmax'))

    # Compile the model
    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

    model.summary()

    return model
cnn_reg = CNN1_reg(optimizer = optimizer) 

lr_schedule = tf.keras.callbacks.LearningRateScheduler(lr_warmup_cosine_decay)

initial_lr = 0.001  
optimizer = tf.keras.optimizers.Adam(learning_rate=initial_lr)

results_cnn_clr, fig_cnn_clr = evaluator.model_evaluate( train_ds_cutmix, val_ds_cutmix , cnn_reg , base_hparams,callbacks=[EarlyStopping(monitor='accuracy', patience=10, restore_best_weights=True), ReduceLROnPlateau(patience=5), lr_schedule])
Model: "CNN_secial_R"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 128, 128, 64)      640       
                                                                 
 batch_normalization (BatchN  (None, 128, 128, 64)     256       
 ormalization)                                                   
                                                                 
 max_pooling2d (MaxPooling2D  (None, 64, 64, 64)       0         
 )                                                               
                                                                 
 dropout (Dropout)           (None, 64, 64, 64)        0         
                                                                 
 conv2d_1 (Conv2D)           (None, 64, 64, 128)       73856     
                                                                 
 batch_normalization_1 (Batc  (None, 64, 64, 128)      512       
 hNormalization)                                                 
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 32, 32, 128)      0         
 2D)                                                             
                                                                 
 dropout_1 (Dropout)         (None, 32, 32, 128)       0         
                                                                 
 conv2d_2 (Conv2D)           (None, 32, 32, 256)       295168    
                                                                 
 batch_normalization_2 (Batc  (None, 32, 32, 256)      1024      
 hNormalization)                                                 
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 16, 16, 256)      0         
 2D)                                                             
                                                                 
 dropout_2 (Dropout)         (None, 16, 16, 256)       0         
                                                                 
 flatten (Flatten)           (None, 65536)             0         
                                                                 
 dense (Dense)               (None, 512)               33554944  
                                                                 
 batch_normalization_3 (Batc  (None, 512)              2048      
 hNormalization)                                                 
                                                                 
 dropout_3 (Dropout)         (None, 512)               0         
                                                                 
 dense_1 (Dense)             (None, 128)               65664     
                                                                 
 batch_normalization_4 (Batc  (None, 128)              512       
 hNormalization)                                                 
                                                                 
 dropout_4 (Dropout)         (None, 128)               0         
                                                                 
 dense_2 (Dense)             (None, 15)                1935      
                                                                 
=================================================================
Total params: 33,996,559
Trainable params: 33,994,383
Non-trainable params: 2,176
_________________________________________________________________
Epoch 1/100
71/71 [==============================] - 8s 67ms/step - loss: 2.1403 - accuracy: 0.3684 - val_loss: 35.7514 - val_accuracy: 0.1050 - lr: 0.0100
Epoch 2/100
71/71 [==============================] - 4s 49ms/step - loss: 1.3178 - accuracy: 0.6083 - val_loss: 13.5476 - val_accuracy: 0.0680 - lr: 0.0100
Epoch 3/100
71/71 [==============================] - 4s 50ms/step - loss: 0.9948 - accuracy: 0.7201 - val_loss: 6.1567 - val_accuracy: 0.1627 - lr: 0.0100
Epoch 4/100
71/71 [==============================] - 4s 49ms/step - loss: 0.7657 - accuracy: 0.7935 - val_loss: 7.4167 - val_accuracy: 0.1263 - lr: 0.0100
Epoch 5/100
71/71 [==============================] - 4s 50ms/step - loss: 0.6364 - accuracy: 0.8442 - val_loss: 6.2521 - val_accuracy: 0.1807 - lr: 0.0100
Epoch 6/100
71/71 [==============================] - 4s 49ms/step - loss: 0.5169 - accuracy: 0.8789 - val_loss: 4.7434 - val_accuracy: 0.2987 - lr: 0.0100
Epoch 7/100
71/71 [==============================] - 4s 50ms/step - loss: 0.4263 - accuracy: 0.9114 - val_loss: 1.5271 - val_accuracy: 0.5763 - lr: 0.0100
Epoch 8/100
71/71 [==============================] - 3s 49ms/step - loss: 0.3599 - accuracy: 0.9374 - val_loss: 7.2530 - val_accuracy: 0.1957 - lr: 0.0100
Epoch 9/100
71/71 [==============================] - 4s 49ms/step - loss: 0.3537 - accuracy: 0.9387 - val_loss: 4.0312 - val_accuracy: 0.2790 - lr: 0.0100
Epoch 10/100
71/71 [==============================] - 3s 48ms/step - loss: 0.3693 - accuracy: 0.9321 - val_loss: 13.2840 - val_accuracy: 0.1053 - lr: 0.0100
Epoch 11/100
71/71 [==============================] - 3s 48ms/step - loss: 0.3824 - accuracy: 0.9267 - val_loss: 9.1653 - val_accuracy: 0.1940 - lr: 0.0100
Epoch 12/100
71/71 [==============================] - 4s 49ms/step - loss: 0.3064 - accuracy: 0.9568 - val_loss: 6.8278 - val_accuracy: 0.1520 - lr: 9.9973e-04
Epoch 13/100
71/71 [==============================] - 4s 49ms/step - loss: 0.2629 - accuracy: 0.9711 - val_loss: 2.7199 - val_accuracy: 0.4080 - lr: 0.0100
Epoch 14/100
71/71 [==============================] - 4s 49ms/step - loss: 0.2435 - accuracy: 0.9790 - val_loss: 1.9759 - val_accuracy: 0.4857 - lr: 0.0100
Epoch 15/100
71/71 [==============================] - 4s 49ms/step - loss: 0.2354 - accuracy: 0.9798 - val_loss: 0.4452 - val_accuracy: 0.8697 - lr: 0.0100
Epoch 16/100
71/71 [==============================] - 4s 49ms/step - loss: 0.2270 - accuracy: 0.9817 - val_loss: 1.3683 - val_accuracy: 0.6100 - lr: 0.0100
Epoch 17/100
71/71 [==============================] - 4s 49ms/step - loss: 0.2120 - accuracy: 0.9868 - val_loss: 0.4104 - val_accuracy: 0.8720 - lr: 0.0100
Epoch 18/100
71/71 [==============================] - 4s 49ms/step - loss: 0.2093 - accuracy: 0.9878 - val_loss: 0.6228 - val_accuracy: 0.8217 - lr: 0.0100
Epoch 19/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1987 - accuracy: 0.9909 - val_loss: 0.5716 - val_accuracy: 0.8360 - lr: 0.0100
Epoch 20/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1926 - accuracy: 0.9938 - val_loss: 0.2338 - val_accuracy: 0.9330 - lr: 0.0100
Epoch 21/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1894 - accuracy: 0.9936 - val_loss: 0.6518 - val_accuracy: 0.8123 - lr: 0.0100
Epoch 22/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1807 - accuracy: 0.9957 - val_loss: 0.2763 - val_accuracy: 0.9243 - lr: 0.0100
Epoch 23/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1790 - accuracy: 0.9952 - val_loss: 0.2352 - val_accuracy: 0.9270 - lr: 0.0100
Epoch 24/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1759 - accuracy: 0.9968 - val_loss: 5.4875 - val_accuracy: 0.1527 - lr: 0.0100
Epoch 25/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1768 - accuracy: 0.9961 - val_loss: 0.7944 - val_accuracy: 0.7880 - lr: 9.9872e-04
Epoch 26/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1720 - accuracy: 0.9978 - val_loss: 0.2200 - val_accuracy: 0.9310 - lr: 0.0100
Epoch 27/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1684 - accuracy: 0.9979 - val_loss: 0.8074 - val_accuracy: 0.7620 - lr: 0.0100
Epoch 28/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1709 - accuracy: 0.9969 - val_loss: 0.2422 - val_accuracy: 0.9297 - lr: 0.0100
Epoch 29/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1665 - accuracy: 0.9973 - val_loss: 0.6504 - val_accuracy: 0.8090 - lr: 0.0100
Epoch 30/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1686 - accuracy: 0.9978 - val_loss: 0.2757 - val_accuracy: 0.9220 - lr: 0.0100
Epoch 31/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1654 - accuracy: 0.9983 - val_loss: 0.2051 - val_accuracy: 0.9383 - lr: 0.0100
Epoch 32/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1856 - accuracy: 0.9925 - val_loss: 2.6279 - val_accuracy: 0.3697 - lr: 0.0100
Epoch 33/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1747 - accuracy: 0.9960 - val_loss: 0.9447 - val_accuracy: 0.7333 - lr: 0.0100
Epoch 34/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1669 - accuracy: 0.9982 - val_loss: 0.2551 - val_accuracy: 0.9240 - lr: 0.0100
Epoch 35/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1630 - accuracy: 0.9981 - val_loss: 1.5174 - val_accuracy: 0.5963 - lr: 0.0100
Epoch 36/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1625 - accuracy: 0.9984 - val_loss: 0.2485 - val_accuracy: 0.9257 - lr: 9.9728e-04
Epoch 37/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1621 - accuracy: 0.9980 - val_loss: 0.9798 - val_accuracy: 0.7190 - lr: 0.0100
Epoch 38/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1585 - accuracy: 0.9994 - val_loss: 1.8865 - val_accuracy: 0.5320 - lr: 0.0100
Epoch 39/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1601 - accuracy: 0.9984 - val_loss: 0.2458 - val_accuracy: 0.9317 - lr: 0.0100
Epoch 40/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1578 - accuracy: 0.9990 - val_loss: 0.2024 - val_accuracy: 0.9427 - lr: 0.0100
Epoch 41/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1588 - accuracy: 0.9989 - val_loss: 0.3869 - val_accuracy: 0.8937 - lr: 0.0100
Epoch 42/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1557 - accuracy: 0.9993 - val_loss: 1.4322 - val_accuracy: 0.6083 - lr: 0.0100
Epoch 43/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1567 - accuracy: 0.9997 - val_loss: 0.3280 - val_accuracy: 0.9013 - lr: 0.0100
Epoch 44/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1532 - accuracy: 0.9992 - val_loss: 0.1957 - val_accuracy: 0.9447 - lr: 0.0100
Epoch 45/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1545 - accuracy: 0.9991 - val_loss: 1.5012 - val_accuracy: 0.5937 - lr: 0.0100
Epoch 46/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1519 - accuracy: 0.9994 - val_loss: 0.3777 - val_accuracy: 0.8927 - lr: 0.0100
Epoch 47/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1575 - accuracy: 0.9983 - val_loss: 2.9380 - val_accuracy: 0.3270 - lr: 0.0100
Epoch 48/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1544 - accuracy: 0.9990 - val_loss: 4.4248 - val_accuracy: 0.2030 - lr: 0.0100
Epoch 49/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1570 - accuracy: 0.9984 - val_loss: 0.3463 - val_accuracy: 0.8990 - lr: 9.9490e-04
Epoch 50/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1552 - accuracy: 0.9993 - val_loss: 0.2370 - val_accuracy: 0.9307 - lr: 0.0099
Epoch 51/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1518 - accuracy: 0.9996 - val_loss: 0.2609 - val_accuracy: 0.9243 - lr: 0.0099
Epoch 52/100
71/71 [==============================] - 3s 48ms/step - loss: 0.1513 - accuracy: 0.9993 - val_loss: 1.3782 - val_accuracy: 0.6037 - lr: 0.0099
Epoch 53/100
71/71 [==============================] - 4s 49ms/step - loss: 0.1497 - accuracy: 0.9997 - val_loss: 0.1967 - val_accuracy: 0.9427 - lr: 0.0099

Analyse the graph¶

In [34]:
fig_cnn_clr()

Analyse the results¶

In [35]:
pd.Series(results_cnn_clr)
Out[35]:
Model Name    CNN_secial_R
Epochs                  53
Batch Size             128
Train Loss        0.153164
Test Loss          0.19572
Train Acc         0.999225
Test Acc          0.944667
dtype: object

CNN 2 Model with special LR¶

In [32]:
def CNN2_Reg(name='CNN2_Reg',  dropout_rate=0.5, optimizer = None):
    filters_list = [32, 64, 128, 128, 128, 128]
    model = Sequential(name=name)

    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', input_shape=(128, 128, 1)))
    model.add(BatchNormalization())
    model.add(ReLU())
    model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same'))
    model.add(BatchNormalization())
    model.add(ReLU())

    model.add(Dropout(dropout_rate))

    for filters in filters_list[2:]:
        model.add(Conv2D(filters=filters, kernel_size=(3, 3), padding='same'))
        model.add(BatchNormalization())
        model.add(ReLU())

    model.add(GlobalAveragePooling2D())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(dropout_rate))
    model.add(Dense(15, activation='softmax')) 
    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    return model
cnn2_reg = CNN2_Reg(optimizer = optimizer) 

lr_schedule = tf.keras.callbacks.LearningRateScheduler(lr_warmup_cosine_decay)

initial_lr = 0.001  
optimizer = tf.keras.optimizers.Adam(learning_rate=initial_lr)

results_cnn2_clr, fig_cnn2_clr = evaluator.model_evaluate( train_ds_cutmix, val_ds_cutmix , cnn2_reg , base_hparams,callbacks=[EarlyStopping(monitor='accuracy', patience=10, restore_best_weights=True), ReduceLROnPlateau(patience=5), lr_schedule])
Model: "CNN2_Reg"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_17 (Conv2D)          (None, 128, 128, 32)      320       
                                                                 
 batch_normalization_17 (Bat  (None, 128, 128, 32)     128       
 chNormalization)                                                
                                                                 
 re_lu_17 (ReLU)             (None, 128, 128, 32)      0         
                                                                 
 conv2d_18 (Conv2D)          (None, 128, 128, 64)      18496     
                                                                 
 batch_normalization_18 (Bat  (None, 128, 128, 64)     256       
 chNormalization)                                                
                                                                 
 re_lu_18 (ReLU)             (None, 128, 128, 64)      0         
                                                                 
 dropout_3 (Dropout)         (None, 128, 128, 64)      0         
                                                                 
 conv2d_19 (Conv2D)          (None, 128, 128, 128)     73856     
                                                                 
 batch_normalization_19 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_19 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_20 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_20 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_20 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_21 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_21 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_21 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 conv2d_22 (Conv2D)          (None, 128, 128, 128)     147584    
                                                                 
 batch_normalization_22 (Bat  (None, 128, 128, 128)    512       
 chNormalization)                                                
                                                                 
 re_lu_22 (ReLU)             (None, 128, 128, 128)     0         
                                                                 
 global_average_pooling2d_2   (None, 128)              0         
 (GlobalAveragePooling2D)                                        
                                                                 
 dense_3 (Dense)             (None, 256)               33024     
                                                                 
 dropout_4 (Dropout)         (None, 256)               0         
                                                                 
 dense_4 (Dense)             (None, 15)                3855      
                                                                 
=================================================================
Total params: 574,735
Trainable params: 573,519
Non-trainable params: 1,216
_________________________________________________________________
Epoch 1/100
71/71 [==============================] - 26s 316ms/step - loss: 2.3510 - accuracy: 0.2343 - val_loss: 74.9609 - val_accuracy: 0.0573 - lr: 0.0100
Epoch 2/100
71/71 [==============================] - 21s 297ms/step - loss: 1.9240 - accuracy: 0.3798 - val_loss: 7.0757 - val_accuracy: 0.1150 - lr: 0.0100
Epoch 3/100
71/71 [==============================] - 21s 297ms/step - loss: 1.7514 - accuracy: 0.4323 - val_loss: 7.7487 - val_accuracy: 0.0787 - lr: 0.0100
Epoch 4/100
71/71 [==============================] - 21s 297ms/step - loss: 1.6588 - accuracy: 0.4713 - val_loss: 29.4657 - val_accuracy: 0.0687 - lr: 0.0100
Epoch 5/100
71/71 [==============================] - 21s 298ms/step - loss: 1.5614 - accuracy: 0.5099 - val_loss: 12.9968 - val_accuracy: 0.1213 - lr: 0.0100
Epoch 6/100
71/71 [==============================] - 21s 297ms/step - loss: 1.4831 - accuracy: 0.5410 - val_loss: 4.7833 - val_accuracy: 0.1963 - lr: 0.0100
Epoch 7/100
71/71 [==============================] - 21s 298ms/step - loss: 1.4217 - accuracy: 0.5589 - val_loss: 21.7436 - val_accuracy: 0.0867 - lr: 0.0100
Epoch 8/100
71/71 [==============================] - 21s 298ms/step - loss: 1.3414 - accuracy: 0.5908 - val_loss: 2.3413 - val_accuracy: 0.3403 - lr: 0.0100
Epoch 9/100
71/71 [==============================] - 21s 298ms/step - loss: 1.2897 - accuracy: 0.6107 - val_loss: 45.9302 - val_accuracy: 0.0690 - lr: 0.0100
Epoch 10/100
71/71 [==============================] - 21s 298ms/step - loss: 1.2588 - accuracy: 0.6206 - val_loss: 4.4542 - val_accuracy: 0.1240 - lr: 0.0100
Epoch 11/100
71/71 [==============================] - 21s 298ms/step - loss: 1.1826 - accuracy: 0.6515 - val_loss: 2.5817 - val_accuracy: 0.3460 - lr: 0.0100
Epoch 12/100
71/71 [==============================] - 21s 298ms/step - loss: 1.1467 - accuracy: 0.6602 - val_loss: 12.7749 - val_accuracy: 0.1857 - lr: 0.0100
Epoch 13/100
71/71 [==============================] - 21s 299ms/step - loss: 1.1121 - accuracy: 0.6739 - val_loss: 7.7926 - val_accuracy: 0.3157 - lr: 9.9968e-04
Epoch 14/100
71/71 [==============================] - 21s 298ms/step - loss: 1.0367 - accuracy: 0.7005 - val_loss: 2.1474 - val_accuracy: 0.4650 - lr: 0.0100
Epoch 15/100
71/71 [==============================] - 21s 298ms/step - loss: 1.0161 - accuracy: 0.7103 - val_loss: 20.6702 - val_accuracy: 0.1043 - lr: 0.0100
Epoch 16/100
71/71 [==============================] - 21s 298ms/step - loss: 0.9726 - accuracy: 0.7273 - val_loss: 1.7594 - val_accuracy: 0.4747 - lr: 0.0100
Epoch 17/100
71/71 [==============================] - 21s 298ms/step - loss: 0.9173 - accuracy: 0.7403 - val_loss: 3.4315 - val_accuracy: 0.3623 - lr: 0.0100
Epoch 18/100
71/71 [==============================] - 21s 298ms/step - loss: 0.9086 - accuracy: 0.7468 - val_loss: 3.3259 - val_accuracy: 0.3013 - lr: 0.0100
Epoch 19/100
71/71 [==============================] - 21s 298ms/step - loss: 0.8940 - accuracy: 0.7523 - val_loss: 19.2103 - val_accuracy: 0.0710 - lr: 0.0100
Epoch 20/100
71/71 [==============================] - 21s 299ms/step - loss: 0.8507 - accuracy: 0.7692 - val_loss: 4.3068 - val_accuracy: 0.3227 - lr: 0.0100
Epoch 21/100
71/71 [==============================] - 21s 298ms/step - loss: 0.8126 - accuracy: 0.7815 - val_loss: 37.8832 - val_accuracy: 0.0953 - lr: 9.9911e-04
Epoch 22/100
71/71 [==============================] - 21s 298ms/step - loss: 0.7806 - accuracy: 0.7874 - val_loss: 4.2584 - val_accuracy: 0.3737 - lr: 0.0100
Epoch 23/100
71/71 [==============================] - 21s 297ms/step - loss: 0.7751 - accuracy: 0.7972 - val_loss: 1.8139 - val_accuracy: 0.5237 - lr: 0.0100
Epoch 24/100
71/71 [==============================] - 21s 297ms/step - loss: 0.7703 - accuracy: 0.7986 - val_loss: 17.0025 - val_accuracy: 0.2933 - lr: 0.0100
Epoch 25/100
71/71 [==============================] - 21s 298ms/step - loss: 0.7250 - accuracy: 0.8161 - val_loss: 30.8819 - val_accuracy: 0.1753 - lr: 0.0100
Epoch 26/100
71/71 [==============================] - 21s 297ms/step - loss: 0.7151 - accuracy: 0.8183 - val_loss: 0.9915 - val_accuracy: 0.7097 - lr: 0.0100
Epoch 27/100
71/71 [==============================] - 21s 297ms/step - loss: 0.6937 - accuracy: 0.8247 - val_loss: 3.8129 - val_accuracy: 0.3653 - lr: 0.0100
Epoch 28/100
71/71 [==============================] - 21s 298ms/step - loss: 0.6698 - accuracy: 0.8344 - val_loss: 2.4241 - val_accuracy: 0.4927 - lr: 0.0100
Epoch 29/100
71/71 [==============================] - 21s 297ms/step - loss: 0.6499 - accuracy: 0.8454 - val_loss: 4.8920 - val_accuracy: 0.2383 - lr: 0.0100
Epoch 30/100
71/71 [==============================] - 21s 298ms/step - loss: 0.6237 - accuracy: 0.8494 - val_loss: 1.4545 - val_accuracy: 0.6350 - lr: 0.0100
Epoch 31/100
71/71 [==============================] - 21s 297ms/step - loss: 0.6161 - accuracy: 0.8554 - val_loss: 4.6006 - val_accuracy: 0.4037 - lr: 9.9800e-04
Epoch 32/100
71/71 [==============================] - 21s 298ms/step - loss: 0.6056 - accuracy: 0.8556 - val_loss: 2.4397 - val_accuracy: 0.4427 - lr: 0.0100
Epoch 33/100
71/71 [==============================] - 21s 298ms/step - loss: 0.5744 - accuracy: 0.8673 - val_loss: 19.1468 - val_accuracy: 0.3710 - lr: 0.0100
Epoch 34/100
71/71 [==============================] - 21s 298ms/step - loss: 0.5622 - accuracy: 0.8741 - val_loss: 2.9263 - val_accuracy: 0.5177 - lr: 0.0100
Epoch 35/100
71/71 [==============================] - 21s 297ms/step - loss: 0.5324 - accuracy: 0.8912 - val_loss: 6.8771 - val_accuracy: 0.2710 - lr: 0.0100
Epoch 36/100
71/71 [==============================] - 21s 298ms/step - loss: 0.5242 - accuracy: 0.8928 - val_loss: 3.3604 - val_accuracy: 0.3943 - lr: 9.9728e-04
Epoch 37/100
71/71 [==============================] - 21s 297ms/step - loss: 0.5219 - accuracy: 0.8931 - val_loss: 5.1293 - val_accuracy: 0.4177 - lr: 0.0100
Epoch 38/100
71/71 [==============================] - 21s 298ms/step - loss: 0.5024 - accuracy: 0.8974 - val_loss: 17.4596 - val_accuracy: 0.3310 - lr: 0.0100
Epoch 39/100
71/71 [==============================] - 21s 297ms/step - loss: 0.5013 - accuracy: 0.8985 - val_loss: 12.5980 - val_accuracy: 0.1900 - lr: 0.0100
Epoch 40/100
71/71 [==============================] - 21s 298ms/step - loss: 0.5035 - accuracy: 0.8958 - val_loss: 15.6311 - val_accuracy: 0.1513 - lr: 0.0100
Epoch 41/100
71/71 [==============================] - 21s 298ms/step - loss: 0.4757 - accuracy: 0.9091 - val_loss: 1.6949 - val_accuracy: 0.6150 - lr: 9.9645e-04
Epoch 42/100
71/71 [==============================] - 21s 298ms/step - loss: 0.4623 - accuracy: 0.9118 - val_loss: 4.5378 - val_accuracy: 0.3847 - lr: 0.0100
Epoch 43/100
71/71 [==============================] - 21s 298ms/step - loss: 0.4635 - accuracy: 0.9091 - val_loss: 2.6324 - val_accuracy: 0.5700 - lr: 0.0100
Epoch 44/100
71/71 [==============================] - 21s 298ms/step - loss: 0.4494 - accuracy: 0.9167 - val_loss: 9.3926 - val_accuracy: 0.3417 - lr: 0.0100
Epoch 45/100
71/71 [==============================] - 21s 297ms/step - loss: 0.4594 - accuracy: 0.9123 - val_loss: 1.5170 - val_accuracy: 0.6760 - lr: 0.0100
Epoch 46/100
71/71 [==============================] - 21s 297ms/step - loss: 0.4258 - accuracy: 0.9229 - val_loss: 3.6279 - val_accuracy: 0.4643 - lr: 9.9551e-04
Epoch 47/100
71/71 [==============================] - 21s 298ms/step - loss: 0.4279 - accuracy: 0.9267 - val_loss: 1.5515 - val_accuracy: 0.6410 - lr: 0.0100
Epoch 48/100
71/71 [==============================] - 21s 297ms/step - loss: 0.4128 - accuracy: 0.9309 - val_loss: 2.6200 - val_accuracy: 0.6563 - lr: 0.0100
Epoch 49/100
71/71 [==============================] - 21s 297ms/step - loss: 0.4010 - accuracy: 0.9338 - val_loss: 1.3411 - val_accuracy: 0.7097 - lr: 0.0099
Epoch 50/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3892 - accuracy: 0.9348 - val_loss: 2.0966 - val_accuracy: 0.5290 - lr: 0.0099
Epoch 51/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3949 - accuracy: 0.9337 - val_loss: 13.3595 - val_accuracy: 0.3410 - lr: 9.9446e-04
Epoch 52/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3804 - accuracy: 0.9423 - val_loss: 2.2609 - val_accuracy: 0.5503 - lr: 0.0099
Epoch 53/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3976 - accuracy: 0.9312 - val_loss: 12.2866 - val_accuracy: 0.3803 - lr: 0.0099
Epoch 54/100
71/71 [==============================] - 21s 298ms/step - loss: 0.3768 - accuracy: 0.9428 - val_loss: 2.0959 - val_accuracy: 0.6230 - lr: 0.0099
Epoch 55/100
71/71 [==============================] - 21s 298ms/step - loss: 0.3726 - accuracy: 0.9455 - val_loss: 0.8176 - val_accuracy: 0.7397 - lr: 0.0099
Epoch 56/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3763 - accuracy: 0.9389 - val_loss: 0.5371 - val_accuracy: 0.8350 - lr: 0.0099
Epoch 57/100
71/71 [==============================] - 21s 298ms/step - loss: 0.3564 - accuracy: 0.9503 - val_loss: 3.3384 - val_accuracy: 0.4030 - lr: 0.0099
Epoch 58/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3602 - accuracy: 0.9471 - val_loss: 1.8329 - val_accuracy: 0.6067 - lr: 0.0099
Epoch 59/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3504 - accuracy: 0.9513 - val_loss: 18.0323 - val_accuracy: 0.2487 - lr: 0.0099
Epoch 60/100
71/71 [==============================] - 21s 298ms/step - loss: 0.3434 - accuracy: 0.9538 - val_loss: 0.3174 - val_accuracy: 0.9133 - lr: 0.0099
Epoch 61/100
71/71 [==============================] - 21s 298ms/step - loss: 0.3355 - accuracy: 0.9559 - val_loss: 35.9723 - val_accuracy: 0.0870 - lr: 0.0099
Epoch 62/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3407 - accuracy: 0.9556 - val_loss: 4.2022 - val_accuracy: 0.5183 - lr: 0.0099
Epoch 63/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3575 - accuracy: 0.9476 - val_loss: 1.4461 - val_accuracy: 0.6467 - lr: 0.0099
Epoch 64/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3438 - accuracy: 0.9553 - val_loss: 0.5533 - val_accuracy: 0.8570 - lr: 0.0099
Epoch 65/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3289 - accuracy: 0.9586 - val_loss: 2.6462 - val_accuracy: 0.6397 - lr: 9.9094e-04
Epoch 66/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3370 - accuracy: 0.9570 - val_loss: 3.0382 - val_accuracy: 0.6277 - lr: 0.0099
Epoch 67/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3067 - accuracy: 0.9664 - val_loss: 0.4394 - val_accuracy: 0.8750 - lr: 0.0099
Epoch 68/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3374 - accuracy: 0.9523 - val_loss: 0.6390 - val_accuracy: 0.8030 - lr: 0.0099
Epoch 69/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3092 - accuracy: 0.9662 - val_loss: 9.1334 - val_accuracy: 0.3397 - lr: 0.0099
Epoch 70/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3139 - accuracy: 0.9642 - val_loss: 1.9721 - val_accuracy: 0.6420 - lr: 9.8947e-04
Epoch 71/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3023 - accuracy: 0.9661 - val_loss: 6.1290 - val_accuracy: 0.4450 - lr: 0.0099
Epoch 72/100
71/71 [==============================] - 21s 298ms/step - loss: 0.3141 - accuracy: 0.9636 - val_loss: 0.5771 - val_accuracy: 0.8557 - lr: 0.0099
Epoch 73/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2987 - accuracy: 0.9699 - val_loss: 2.3711 - val_accuracy: 0.6177 - lr: 0.0099
Epoch 74/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2987 - accuracy: 0.9662 - val_loss: 2.0595 - val_accuracy: 0.5597 - lr: 0.0099
Epoch 75/100
71/71 [==============================] - 21s 297ms/step - loss: 0.3051 - accuracy: 0.9665 - val_loss: 1.8227 - val_accuracy: 0.6483 - lr: 9.8790e-04
Epoch 76/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2865 - accuracy: 0.9709 - val_loss: 0.2911 - val_accuracy: 0.9123 - lr: 0.0099
Epoch 77/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2879 - accuracy: 0.9711 - val_loss: 7.9214 - val_accuracy: 0.3390 - lr: 0.0099
Epoch 78/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2853 - accuracy: 0.9729 - val_loss: 5.8425 - val_accuracy: 0.4823 - lr: 0.0099
Epoch 79/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2827 - accuracy: 0.9753 - val_loss: 4.8461 - val_accuracy: 0.5063 - lr: 0.0099
Epoch 80/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2770 - accuracy: 0.9743 - val_loss: 0.3016 - val_accuracy: 0.9110 - lr: 0.0099
Epoch 81/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2844 - accuracy: 0.9723 - val_loss: 1.1339 - val_accuracy: 0.7227 - lr: 9.8587e-04
Epoch 82/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2800 - accuracy: 0.9731 - val_loss: 3.2300 - val_accuracy: 0.5413 - lr: 0.0099
Epoch 83/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2761 - accuracy: 0.9753 - val_loss: 1.9819 - val_accuracy: 0.6553 - lr: 0.0099
Epoch 84/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2685 - accuracy: 0.9786 - val_loss: 3.7688 - val_accuracy: 0.5373 - lr: 0.0098
Epoch 85/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2762 - accuracy: 0.9746 - val_loss: 2.0127 - val_accuracy: 0.5893 - lr: 0.0098
Epoch 86/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2799 - accuracy: 0.9742 - val_loss: 0.3813 - val_accuracy: 0.8827 - lr: 9.8405e-04
Epoch 87/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2796 - accuracy: 0.9742 - val_loss: 0.6794 - val_accuracy: 0.8417 - lr: 0.0098
Epoch 88/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2682 - accuracy: 0.9774 - val_loss: 0.3765 - val_accuracy: 0.8830 - lr: 0.0098
Epoch 89/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2699 - accuracy: 0.9760 - val_loss: 2.0573 - val_accuracy: 0.6023 - lr: 0.0098
Epoch 90/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2800 - accuracy: 0.9726 - val_loss: 0.9824 - val_accuracy: 0.7303 - lr: 0.0098
Epoch 91/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2698 - accuracy: 0.9767 - val_loss: 0.5290 - val_accuracy: 0.8860 - lr: 9.8214e-04
Epoch 92/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2588 - accuracy: 0.9802 - val_loss: 1.7468 - val_accuracy: 0.7230 - lr: 0.0098
Epoch 93/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2648 - accuracy: 0.9770 - val_loss: 28.9162 - val_accuracy: 0.2887 - lr: 0.0098
Epoch 94/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2708 - accuracy: 0.9755 - val_loss: 9.8529 - val_accuracy: 0.3633 - lr: 0.0098
Epoch 95/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2617 - accuracy: 0.9790 - val_loss: 1.0323 - val_accuracy: 0.7523 - lr: 0.0098
Epoch 96/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2777 - accuracy: 0.9735 - val_loss: 0.5969 - val_accuracy: 0.8337 - lr: 9.8011e-04
Epoch 97/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2554 - accuracy: 0.9806 - val_loss: 1.3256 - val_accuracy: 0.6893 - lr: 0.0098
Epoch 98/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2541 - accuracy: 0.9805 - val_loss: 0.6708 - val_accuracy: 0.7940 - lr: 0.0098
Epoch 99/100
71/71 [==============================] - 21s 298ms/step - loss: 0.2522 - accuracy: 0.9812 - val_loss: 10.5974 - val_accuracy: 0.2490 - lr: 0.0098
Epoch 100/100
71/71 [==============================] - 21s 297ms/step - loss: 0.2640 - accuracy: 0.9765 - val_loss: 7.4060 - val_accuracy: 0.4123 - lr: 0.0098

Analyse the graph¶

In [ ]:
fig_cnn2_clr()

Analyse the results¶

In [ ]:
pd.Series(results_cnn2_clr)
In [ ]:
import pandas as pd

data = {
    'Model Name': [
        'CNN basic one', 'CNN2 basic one', 'VGG_Baseline basic one', 'CNN basic two',
        'CNN2 basic two', 'VGG_Baseline basic two', 'CNN cutmix', 'CNN2 cutmix',
        'VGG_Baseline cutmix', 'CNN cutout', 'CNN2 cutout', 'VGG_Baseline cutout',
        'CNN_Reg', 'CNN2_Reg'
    ],
    'Epochs': [38, 53, 63, 24, 77, 98, 42, 52, 71, 44, 45, 80, 44, 78],
    'Batch Size': [128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128],
    'Train Loss': [0.017855, 0.015596, 0.169651, 0.228279, 0.038791, 0.144072, 0.164451,
                   0.157783, 0.238155, 0.009736, 0.042143, 0.129479, 0.881994, 0.307886],
    'Test Loss': [0.623409, 0.062343, 0.240200, 0.786181, 0.089882, 0.220837, 0.178881,
                  0.055629, 0.141491, 0.181197, 0.069994, 0.174903, 0.921476, 0.334669],
    'Train Acc': [0.996677, 0.999668, 0.996788, 0.941294, 0.997009, 0.999778, 0.998892,
                  0.999335, 0.999889, 0.997674, 0.994019, 0.999225, 0.995126, 0.985711],
    'Test Acc': [0.840000, 0.982667, 0.974000, 0.758333, 0.979000, 0.976333, 0.945667,
                 0.992333, 0.994333, 0.951667, 0.984000, 0.986333, 0.964667, 0.970333],
    'Kappa': [0.839286, 0.985000, 0.972500, 0.745714, 0.979286, 0.974643, 0.951071,
              0.990714, 0.991429, 0.956429, 0.981786, 0.983214, 0.964643, 0.967143],
    'Comments': [None, None, None, None, None, None, None, None, None, None, None, None, 'Na', 'Na']
}

overall = pd.DataFrame(overall)
print(overall)
In [ ]: